2025-09-04 00:00:11.411790 | Job console starting 2025-09-04 00:00:11.422746 | Updating git repos 2025-09-04 00:00:11.646209 | Cloning repos into workspace 2025-09-04 00:00:11.829741 | Restoring repo states 2025-09-04 00:00:11.855388 | Merging changes 2025-09-04 00:00:11.855408 | Checking out repos 2025-09-04 00:00:12.296295 | Preparing playbooks 2025-09-04 00:00:13.055473 | Running Ansible setup 2025-09-04 00:00:18.131080 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main] 2025-09-04 00:00:20.178937 | 2025-09-04 00:00:20.179102 | PLAY [Base pre] 2025-09-04 00:00:20.224040 | 2025-09-04 00:00:20.224185 | TASK [Setup log path fact] 2025-09-04 00:00:20.300975 | orchestrator | ok 2025-09-04 00:00:20.344113 | 2025-09-04 00:00:20.344253 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-09-04 00:00:20.404877 | orchestrator | ok 2025-09-04 00:00:20.437846 | 2025-09-04 00:00:20.437969 | TASK [emit-job-header : Print job information] 2025-09-04 00:00:20.519833 | # Job Information 2025-09-04 00:00:20.519991 | Ansible Version: 2.16.14 2025-09-04 00:00:20.520025 | Job: testbed-deploy-in-a-nutshell-with-tempest-ubuntu-24.04 2025-09-04 00:00:20.520058 | Pipeline: periodic-midnight 2025-09-04 00:00:20.520080 | Executor: 521e9411259a 2025-09-04 00:00:20.520101 | Triggered by: https://github.com/osism/testbed 2025-09-04 00:00:20.520136 | Event ID: 8fbc56587c674d7d88f4d95b035d40bd 2025-09-04 00:00:20.526592 | 2025-09-04 00:00:20.539170 | LOOP [emit-job-header : Print node information] 2025-09-04 00:00:20.802748 | orchestrator | ok: 2025-09-04 00:00:20.807080 | orchestrator | # Node Information 2025-09-04 00:00:20.807143 | orchestrator | Inventory Hostname: orchestrator 2025-09-04 00:00:20.807173 | orchestrator | Hostname: zuul-static-regiocloud-infra-1 2025-09-04 00:00:20.807197 | orchestrator | Username: zuul-testbed03 2025-09-04 00:00:20.807219 | orchestrator | Distro: Debian 12.11 2025-09-04 00:00:20.807243 | orchestrator | Provider: static-testbed 2025-09-04 00:00:20.807266 | orchestrator | Region: 2025-09-04 00:00:20.807287 | orchestrator | Label: testbed-orchestrator 2025-09-04 00:00:20.807307 | orchestrator | Product Name: OpenStack Nova 2025-09-04 00:00:20.807327 | orchestrator | Interface IP: 81.163.193.140 2025-09-04 00:00:20.836218 | 2025-09-04 00:00:20.836347 | TASK [log-inventory : Ensure Zuul Ansible directory exists] 2025-09-04 00:00:22.238603 | orchestrator -> localhost | changed 2025-09-04 00:00:22.245198 | 2025-09-04 00:00:22.245292 | TASK [log-inventory : Copy ansible inventory to logs dir] 2025-09-04 00:00:25.355670 | orchestrator -> localhost | changed 2025-09-04 00:00:25.369157 | 2025-09-04 00:00:25.369255 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build] 2025-09-04 00:00:25.908011 | orchestrator -> localhost | ok 2025-09-04 00:00:25.913975 | 2025-09-04 00:00:25.914071 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID] 2025-09-04 00:00:25.955551 | orchestrator | ok 2025-09-04 00:00:25.981241 | orchestrator | included: /var/lib/zuul/builds/61431fb139e44046ab6d513fa6ea3210/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml 2025-09-04 00:00:25.996135 | 2025-09-04 00:00:25.996223 | TASK [add-build-sshkey : Create Temp SSH key] 2025-09-04 00:00:28.040283 | orchestrator -> localhost | Generating public/private rsa key pair. 
2025-09-04 00:00:28.040510 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/61431fb139e44046ab6d513fa6ea3210/work/61431fb139e44046ab6d513fa6ea3210_id_rsa 2025-09-04 00:00:28.040546 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/61431fb139e44046ab6d513fa6ea3210/work/61431fb139e44046ab6d513fa6ea3210_id_rsa.pub 2025-09-04 00:00:28.040570 | orchestrator -> localhost | The key fingerprint is: 2025-09-04 00:00:28.040593 | orchestrator -> localhost | SHA256:LX4O9l29ErcRz4Ber22ioN3gNlBf05A5Wc8GeYnJmDo zuul-build-sshkey 2025-09-04 00:00:28.040612 | orchestrator -> localhost | The key's randomart image is: 2025-09-04 00:00:28.040638 | orchestrator -> localhost | +---[RSA 3072]----+ 2025-09-04 00:00:28.040658 | orchestrator -> localhost | | + +*o| 2025-09-04 00:00:28.040676 | orchestrator -> localhost | | o +B+o| 2025-09-04 00:00:28.040693 | orchestrator -> localhost | | . . =+| 2025-09-04 00:00:28.040709 | orchestrator -> localhost | | E. . =o.| 2025-09-04 00:00:28.040726 | orchestrator -> localhost | | S.oo o =o| 2025-09-04 00:00:28.040745 | orchestrator -> localhost | | ... o. ++| 2025-09-04 00:00:28.040761 | orchestrator -> localhost | | +.+ +o+| 2025-09-04 00:00:28.040778 | orchestrator -> localhost | | . Oo= oo.+| 2025-09-04 00:00:28.040796 | orchestrator -> localhost | | ..=.+..+ | 2025-09-04 00:00:28.040812 | orchestrator -> localhost | +----[SHA256]-----+ 2025-09-04 00:00:28.040853 | orchestrator -> localhost | ok: Runtime: 0:00:01.116474 2025-09-04 00:00:28.046710 | 2025-09-04 00:00:28.046792 | TASK [add-build-sshkey : Remote setup ssh keys (linux)] 2025-09-04 00:00:28.074512 | orchestrator | ok 2025-09-04 00:00:28.089208 | orchestrator | included: /var/lib/zuul/builds/61431fb139e44046ab6d513fa6ea3210/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml 2025-09-04 00:00:28.120269 | 2025-09-04 00:00:28.120374 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey] 2025-09-04 00:00:28.155243 | orchestrator | skipping: Conditional result was False 2025-09-04 00:00:28.162566 | 2025-09-04 00:00:28.162663 | TASK [add-build-sshkey : Enable access via build key on all nodes] 2025-09-04 00:00:29.136050 | orchestrator | changed 2025-09-04 00:00:29.184952 | 2025-09-04 00:00:29.185084 | TASK [add-build-sshkey : Make sure user has a .ssh] 2025-09-04 00:00:29.486298 | orchestrator | ok 2025-09-04 00:00:29.493267 | 2025-09-04 00:00:29.493952 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes] 2025-09-04 00:00:29.992563 | orchestrator | ok 2025-09-04 00:00:30.008174 | 2025-09-04 00:00:30.008270 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes] 2025-09-04 00:00:30.489931 | orchestrator | ok 2025-09-04 00:00:30.520729 | 2025-09-04 00:00:30.520821 | TASK [add-build-sshkey : Remote setup ssh keys (windows)] 2025-09-04 00:00:30.553893 | orchestrator | skipping: Conditional result was False 2025-09-04 00:00:30.561077 | 2025-09-04 00:00:30.561184 | TASK [remove-zuul-sshkey : Remove master key from local agent] 2025-09-04 00:00:31.220185 | orchestrator -> localhost | changed 2025-09-04 00:00:31.231253 | 2025-09-04 00:00:31.231352 | TASK [add-build-sshkey : Add back temp key] 2025-09-04 00:00:31.964042 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/61431fb139e44046ab6d513fa6ea3210/work/61431fb139e44046ab6d513fa6ea3210_id_rsa (zuul-build-sshkey) 2025-09-04 00:00:31.964222 | orchestrator -> localhost | ok: 
Runtime: 0:00:00.026341 2025-09-04 00:00:31.975944 | 2025-09-04 00:00:31.976035 | TASK [add-build-sshkey : Verify we can still SSH to all nodes] 2025-09-04 00:00:32.609910 | orchestrator | ok 2025-09-04 00:00:32.621578 | 2025-09-04 00:00:32.621686 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)] 2025-09-04 00:00:32.669970 | orchestrator | skipping: Conditional result was False 2025-09-04 00:00:32.783805 | 2025-09-04 00:00:32.784043 | TASK [start-zuul-console : Start zuul_console daemon.] 2025-09-04 00:00:33.291272 | orchestrator | ok 2025-09-04 00:00:33.306459 | 2025-09-04 00:00:33.306555 | TASK [validate-host : Define zuul_info_dir fact] 2025-09-04 00:00:33.368784 | orchestrator | ok 2025-09-04 00:00:33.379147 | 2025-09-04 00:00:33.379242 | TASK [validate-host : Ensure Zuul Ansible directory exists] 2025-09-04 00:00:34.139685 | orchestrator -> localhost | ok 2025-09-04 00:00:34.145692 | 2025-09-04 00:00:34.145773 | TASK [validate-host : Collect information about the host] 2025-09-04 00:00:35.773641 | orchestrator | ok 2025-09-04 00:00:35.807320 | 2025-09-04 00:00:35.807441 | TASK [validate-host : Sanitize hostname] 2025-09-04 00:00:35.857066 | orchestrator | ok 2025-09-04 00:00:35.861378 | 2025-09-04 00:00:35.861473 | TASK [validate-host : Write out all ansible variables/facts known for each host] 2025-09-04 00:00:37.137063 | orchestrator -> localhost | changed 2025-09-04 00:00:37.143158 | 2025-09-04 00:00:37.143255 | TASK [validate-host : Collect information about zuul worker] 2025-09-04 00:00:37.705475 | orchestrator | ok 2025-09-04 00:00:37.710499 | 2025-09-04 00:00:37.710595 | TASK [validate-host : Write out all zuul information for each host] 2025-09-04 00:00:38.925532 | orchestrator -> localhost | changed 2025-09-04 00:00:38.939560 | 2025-09-04 00:00:38.939674 | TASK [prepare-workspace-log : Start zuul_console daemon.] 2025-09-04 00:00:39.445716 | orchestrator | ok 2025-09-04 00:00:39.455449 | 2025-09-04 00:00:39.455555 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.] 2025-09-04 00:01:19.459278 | orchestrator | changed: 2025-09-04 00:01:19.459463 | orchestrator | .d..t...... src/ 2025-09-04 00:01:19.459499 | orchestrator | .d..t...... src/github.com/ 2025-09-04 00:01:19.459523 | orchestrator | .d..t...... src/github.com/osism/ 2025-09-04 00:01:19.459544 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/ 2025-09-04 00:01:19.459565 | orchestrator | RedHat.yml 2025-09-04 00:01:19.473974 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml 2025-09-04 00:01:19.473992 | orchestrator | RedHat.yml 2025-09-04 00:01:19.474044 | orchestrator | = 1.53.0"... 2025-09-04 00:01:31.696810 | orchestrator | 00:01:31.696 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"... 2025-09-04 00:01:32.045095 | orchestrator | 00:01:32.044 STDOUT terraform: - Installing hashicorp/null v3.2.4... 2025-09-04 00:01:32.704113 | orchestrator | 00:01:32.703 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80) 2025-09-04 00:01:33.329790 | orchestrator | 00:01:33.329 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2... 
2025-09-04 00:01:34.164679 | orchestrator | 00:01:34.164 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2) 2025-09-04 00:01:34.235581 | orchestrator | 00:01:34.235 STDOUT terraform: - Installing hashicorp/local v2.5.3... 2025-09-04 00:01:34.711308 | orchestrator | 00:01:34.711 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80) 2025-09-04 00:01:34.711419 | orchestrator | 00:01:34.711 STDOUT terraform: Providers are signed by their developers. 2025-09-04 00:01:34.711429 | orchestrator | 00:01:34.711 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here: 2025-09-04 00:01:34.711436 | orchestrator | 00:01:34.711 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/ 2025-09-04 00:01:34.711444 | orchestrator | 00:01:34.711 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider 2025-09-04 00:01:34.711512 | orchestrator | 00:01:34.711 STDOUT terraform: selections it made above. Include this file in your version control repository 2025-09-04 00:01:34.711545 | orchestrator | 00:01:34.711 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when 2025-09-04 00:01:34.711571 | orchestrator | 00:01:34.711 STDOUT terraform: you run "tofu init" in the future. 2025-09-04 00:01:34.711821 | orchestrator | 00:01:34.711 STDOUT terraform: OpenTofu has been successfully initialized! 2025-09-04 00:01:34.711832 | orchestrator | 00:01:34.711 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see 2025-09-04 00:01:34.711839 | orchestrator | 00:01:34.711 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands 2025-09-04 00:01:34.711860 | orchestrator | 00:01:34.711 STDOUT terraform: should now work. 2025-09-04 00:01:34.711915 | orchestrator | 00:01:34.711 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu, 2025-09-04 00:01:34.711967 | orchestrator | 00:01:34.711 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other 2025-09-04 00:01:34.712012 | orchestrator | 00:01:34.711 STDOUT terraform: commands will detect it and remind you to do so if necessary. 2025-09-04 00:01:34.812741 | orchestrator | 00:01:34.812 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead. 2025-09-04 00:01:34.812835 | orchestrator | 00:01:34.812 WARN  The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead. 2025-09-04 00:01:35.006116 | orchestrator | 00:01:35.005 STDOUT terraform: Created and switched to workspace "ci"! 2025-09-04 00:01:35.006190 | orchestrator | 00:01:35.005 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state, 2025-09-04 00:01:35.006208 | orchestrator | 00:01:35.006 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state 2025-09-04 00:01:35.006214 | orchestrator | 00:01:35.006 STDOUT terraform: for this configuration. 2025-09-04 00:01:35.150365 | orchestrator | 00:01:35.150 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead. 
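The init output above shows OpenTofu resolving three providers for the testbed plan (hashicorp/null v3.2.4, hashicorp/local v2.5.3, terraform-provider-openstack/openstack v3.3.2) and then creating a dedicated "ci" workspace so this run's state stays isolated. A minimal sketch of a provider requirements block consistent with that output follows; only the ">= 2.2.0" constraint for hashicorp/local and the resolved versions are visible in the log, so the remaining constraints (and the assumption that the truncated ">= 1.53.0" fragment earlier in the log belongs to the openstack provider) are illustrative, not the actual osism/testbed configuration.

terraform {
  required_providers {
    null = {
      source  = "hashicorp/null"
      version = ">= 3.2.0"    # assumed constraint; this run resolved v3.2.4
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0"    # constraint shown in the init output; v2.5.3 resolved
    }
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0"   # assumed to be the truncated constraint above; v3.3.2 resolved
    }
  }
}

The 'Created and switched to workspace "ci"!' message above is the result of creating a new OpenTofu workspace (for example `tofu workspace new ci`), which Terragrunt triggers here through its deprecated `workspace` pass-through command.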
2025-09-04 00:01:35.150421 | orchestrator | 00:01:35.150 WARN  The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead. 2025-09-04 00:01:35.239744 | orchestrator | 00:01:35.239 STDOUT terraform: ci.auto.tfvars 2025-09-04 00:01:35.241902 | orchestrator | 00:01:35.241 STDOUT terraform: default_custom.tf 2025-09-04 00:01:35.374997 | orchestrator | 00:01:35.374 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead. 2025-09-04 00:01:36.198043 | orchestrator | 00:01:36.196 STDOUT terraform: data.openstack_networking_network_v2.public: Reading... 2025-09-04 00:01:36.731576 | orchestrator | 00:01:36.731 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a] 2025-09-04 00:01:37.038086 | orchestrator | 00:01:37.031 STDOUT terraform: OpenTofu used the selected providers to generate the following execution 2025-09-04 00:01:37.038155 | orchestrator | 00:01:37.031 STDOUT terraform: plan. Resource actions are indicated with the following symbols: 2025-09-04 00:01:37.038162 | orchestrator | 00:01:37.031 STDOUT terraform:  + create 2025-09-04 00:01:37.038168 | orchestrator | 00:01:37.031 STDOUT terraform:  <= read (data resources) 2025-09-04 00:01:37.038173 | orchestrator | 00:01:37.031 STDOUT terraform: OpenTofu will perform the following actions: 2025-09-04 00:01:37.038177 | orchestrator | 00:01:37.031 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply 2025-09-04 00:01:37.038181 | orchestrator | 00:01:37.031 STDOUT terraform:  # (config refers to values not yet known) 2025-09-04 00:01:37.038186 | orchestrator | 00:01:37.031 STDOUT terraform:  <= data "openstack_images_image_v2" "image" { 2025-09-04 00:01:37.038190 | orchestrator | 00:01:37.031 STDOUT terraform:  + checksum = (known after apply) 2025-09-04 00:01:37.038194 | orchestrator | 00:01:37.031 STDOUT terraform:  + created_at = (known after apply) 2025-09-04 00:01:37.038198 | orchestrator | 00:01:37.031 STDOUT terraform:  + file = (known after apply) 2025-09-04 00:01:37.038202 | orchestrator | 00:01:37.031 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.038206 | orchestrator | 00:01:37.031 STDOUT terraform:  + metadata = (known after apply) 2025-09-04 00:01:37.038226 | orchestrator | 00:01:37.031 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-09-04 00:01:37.038230 | orchestrator | 00:01:37.031 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-09-04 00:01:37.038234 | orchestrator | 00:01:37.031 STDOUT terraform:  + most_recent = true 2025-09-04 00:01:37.038238 | orchestrator | 00:01:37.031 STDOUT terraform:  + name = (known after apply) 2025-09-04 00:01:37.038241 | orchestrator | 00:01:37.031 STDOUT terraform:  + protected = (known after apply) 2025-09-04 00:01:37.038245 | orchestrator | 00:01:37.031 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.038249 | orchestrator | 00:01:37.032 STDOUT terraform:  + schema = (known after apply) 2025-09-04 00:01:37.038253 | orchestrator | 00:01:37.032 STDOUT terraform:  + size_bytes = (known after apply) 2025-09-04 00:01:37.038257 | orchestrator | 00:01:37.032 STDOUT terraform:  + tags = (known after apply) 2025-09-04 00:01:37.038261 | orchestrator | 00:01:37.032 STDOUT terraform:  + updated_at = (known after apply) 2025-09-04 00:01:37.038265 | orchestrator | 
00:01:37.032 STDOUT terraform:  } 2025-09-04 00:01:37.038272 | orchestrator | 00:01:37.032 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply 2025-09-04 00:01:37.038276 | orchestrator | 00:01:37.032 STDOUT terraform:  # (config refers to values not yet known) 2025-09-04 00:01:37.038279 | orchestrator | 00:01:37.032 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" { 2025-09-04 00:01:37.038283 | orchestrator | 00:01:37.032 STDOUT terraform:  + checksum = (known after apply) 2025-09-04 00:01:37.038287 | orchestrator | 00:01:37.032 STDOUT terraform:  + created_at = (known after apply) 2025-09-04 00:01:37.038291 | orchestrator | 00:01:37.032 STDOUT terraform:  + file = (known after apply) 2025-09-04 00:01:37.038294 | orchestrator | 00:01:37.032 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.038298 | orchestrator | 00:01:37.032 STDOUT terraform:  + metadata = (known after apply) 2025-09-04 00:01:37.038302 | orchestrator | 00:01:37.032 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-09-04 00:01:37.038306 | orchestrator | 00:01:37.032 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-09-04 00:01:37.038315 | orchestrator | 00:01:37.032 STDOUT terraform:  + most_recent = true 2025-09-04 00:01:37.038319 | orchestrator | 00:01:37.032 STDOUT terraform:  + name = (known after apply) 2025-09-04 00:01:37.038323 | orchestrator | 00:01:37.032 STDOUT terraform:  + protected = (known after apply) 2025-09-04 00:01:37.038327 | orchestrator | 00:01:37.032 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.038364 | orchestrator | 00:01:37.032 STDOUT terraform:  + schema = (known after apply) 2025-09-04 00:01:37.038369 | orchestrator | 00:01:37.032 STDOUT terraform:  + size_bytes = (known after apply) 2025-09-04 00:01:37.038373 | orchestrator | 00:01:37.032 STDOUT terraform:  + tags = (known after apply) 2025-09-04 00:01:37.038377 | orchestrator | 00:01:37.032 STDOUT terraform:  + updated_at = (known after apply) 2025-09-04 00:01:37.038381 | orchestrator | 00:01:37.032 STDOUT terraform:  } 2025-09-04 00:01:37.038385 | orchestrator | 00:01:37.032 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created 2025-09-04 00:01:37.038392 | orchestrator | 00:01:37.032 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" { 2025-09-04 00:01:37.038396 | orchestrator | 00:01:37.032 STDOUT terraform:  + content = (known after apply) 2025-09-04 00:01:37.038401 | orchestrator | 00:01:37.032 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-09-04 00:01:37.038404 | orchestrator | 00:01:37.032 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-09-04 00:01:37.038408 | orchestrator | 00:01:37.032 STDOUT terraform:  + content_md5 = (known after apply) 2025-09-04 00:01:37.038412 | orchestrator | 00:01:37.032 STDOUT terraform:  + content_sha1 = (known after apply) 2025-09-04 00:01:37.038416 | orchestrator | 00:01:37.032 STDOUT terraform:  + content_sha256 = (known after apply) 2025-09-04 00:01:37.038420 | orchestrator | 00:01:37.032 STDOUT terraform:  + content_sha512 = (known after apply) 2025-09-04 00:01:37.038424 | orchestrator | 00:01:37.032 STDOUT terraform:  + directory_permission = "0777" 2025-09-04 00:01:37.038428 | orchestrator | 00:01:37.032 STDOUT terraform:  + file_permission = "0644" 2025-09-04 00:01:37.038432 | orchestrator | 00:01:37.032 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci" 2025-09-04 00:01:37.038435 | orchestrator | 00:01:37.032 STDOUT 
terraform:  + id = (known after apply) 2025-09-04 00:01:37.038439 | orchestrator | 00:01:37.032 STDOUT terraform:  } 2025-09-04 00:01:37.038443 | orchestrator | 00:01:37.032 STDOUT terraform:  # local_file.id_rsa_pub will be created 2025-09-04 00:01:37.038447 | orchestrator | 00:01:37.032 STDOUT terraform:  + resource "local_file" "id_rsa_pub" { 2025-09-04 00:01:37.038451 | orchestrator | 00:01:37.032 STDOUT terraform:  + content = (known after apply) 2025-09-04 00:01:37.038455 | orchestrator | 00:01:37.033 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-09-04 00:01:37.038458 | orchestrator | 00:01:37.033 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-09-04 00:01:37.038462 | orchestrator | 00:01:37.033 STDOUT terraform:  + content_md5 = (known after apply) 2025-09-04 00:01:37.038466 | orchestrator | 00:01:37.033 STDOUT terraform:  + content_sha1 = (known after apply) 2025-09-04 00:01:37.038470 | orchestrator | 00:01:37.033 STDOUT terraform:  + content_sha256 = (known after apply) 2025-09-04 00:01:37.038474 | orchestrator | 00:01:37.033 STDOUT terraform:  + content_sha512 = (known after apply) 2025-09-04 00:01:37.038478 | orchestrator | 00:01:37.033 STDOUT terraform:  + directory_permission = "0777" 2025-09-04 00:01:37.038481 | orchestrator | 00:01:37.033 STDOUT terraform:  + file_permission = "0644" 2025-09-04 00:01:37.038485 | orchestrator | 00:01:37.033 STDOUT terraform:  + filename = ".id_rsa.ci.pub" 2025-09-04 00:01:37.038489 | orchestrator | 00:01:37.033 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.038493 | orchestrator | 00:01:37.033 STDOUT terraform:  } 2025-09-04 00:01:37.038499 | orchestrator | 00:01:37.033 STDOUT terraform:  # local_file.inventory will be created 2025-09-04 00:01:37.038503 | orchestrator | 00:01:37.033 STDOUT terraform:  + resource "local_file" "inventory" { 2025-09-04 00:01:37.038507 | orchestrator | 00:01:37.033 STDOUT terraform:  + content = (known after apply) 2025-09-04 00:01:37.038514 | orchestrator | 00:01:37.033 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-09-04 00:01:37.038518 | orchestrator | 00:01:37.033 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-09-04 00:01:37.038526 | orchestrator | 00:01:37.033 STDOUT terraform:  + content_md5 = (known after apply) 2025-09-04 00:01:37.038530 | orchestrator | 00:01:37.033 STDOUT terraform:  + content_sha1 = (known after apply) 2025-09-04 00:01:37.038534 | orchestrator | 00:01:37.033 STDOUT terraform:  + content_sha256 = (known after apply) 2025-09-04 00:01:37.038538 | orchestrator | 00:01:37.033 STDOUT terraform:  + content_sha512 = (known after apply) 2025-09-04 00:01:37.038542 | orchestrator | 00:01:37.033 STDOUT terraform:  + directory_permission = "0777" 2025-09-04 00:01:37.038546 | orchestrator | 00:01:37.033 STDOUT terraform:  + file_permission = "0644" 2025-09-04 00:01:37.038549 | orchestrator | 00:01:37.033 STDOUT terraform:  + filename = "inventory.ci" 2025-09-04 00:01:37.038553 | orchestrator | 00:01:37.033 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.038557 | orchestrator | 00:01:37.033 STDOUT terraform:  } 2025-09-04 00:01:37.038561 | orchestrator | 00:01:37.033 STDOUT terraform:  # local_sensitive_file.id_rsa will be created 2025-09-04 00:01:37.038565 | orchestrator | 00:01:37.033 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" { 2025-09-04 00:01:37.038569 | orchestrator | 00:01:37.033 STDOUT terraform:  + content = (sensitive value) 2025-09-04 
00:01:37.038573 | orchestrator | 00:01:37.033 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-09-04 00:01:37.038577 | orchestrator | 00:01:37.033 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-09-04 00:01:37.038580 | orchestrator | 00:01:37.033 STDOUT terraform:  + content_md5 = (known after apply) 2025-09-04 00:01:37.038584 | orchestrator | 00:01:37.033 STDOUT terraform:  + content_sha1 = (known after apply) 2025-09-04 00:01:37.038588 | orchestrator | 00:01:37.033 STDOUT terraform:  + content_sha256 = (known after apply) 2025-09-04 00:01:37.038592 | orchestrator | 00:01:37.033 STDOUT terraform:  + content_sha512 = (known after apply) 2025-09-04 00:01:37.038596 | orchestrator | 00:01:37.033 STDOUT terraform:  + directory_permission = "0700" 2025-09-04 00:01:37.038599 | orchestrator | 00:01:37.033 STDOUT terraform:  + file_permission = "0600" 2025-09-04 00:01:37.038603 | orchestrator | 00:01:37.033 STDOUT terraform:  + filename = ".id_rsa.ci" 2025-09-04 00:01:37.038607 | orchestrator | 00:01:37.034 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.038611 | orchestrator | 00:01:37.034 STDOUT terraform:  } 2025-09-04 00:01:37.038615 | orchestrator | 00:01:37.034 STDOUT terraform:  # null_resource.node_semaphore will be created 2025-09-04 00:01:37.038618 | orchestrator | 00:01:37.034 STDOUT terraform:  + resource "null_resource" "node_semaphore" { 2025-09-04 00:01:37.038622 | orchestrator | 00:01:37.034 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.038626 | orchestrator | 00:01:37.034 STDOUT terraform:  } 2025-09-04 00:01:37.038630 | orchestrator | 00:01:37.034 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2025-09-04 00:01:37.038640 | orchestrator | 00:01:37.034 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2025-09-04 00:01:37.038643 | orchestrator | 00:01:37.034 STDOUT terraform:  + attachment = (known after apply) 2025-09-04 00:01:37.038647 | orchestrator | 00:01:37.034 STDOUT terraform:  + availability_zone = "nova" 2025-09-04 00:01:37.038651 | orchestrator | 00:01:37.034 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.038655 | orchestrator | 00:01:37.034 STDOUT terraform:  + image_id = (known after apply) 2025-09-04 00:01:37.038659 | orchestrator | 00:01:37.034 STDOUT terraform:  + metadata = (known after apply) 2025-09-04 00:01:37.038663 | orchestrator | 00:01:37.034 STDOUT terraform:  + name = "testbed-volume-manager-base" 2025-09-04 00:01:37.038667 | orchestrator | 00:01:37.034 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.038670 | orchestrator | 00:01:37.034 STDOUT terraform:  + size = 80 2025-09-04 00:01:37.038679 | orchestrator | 00:01:37.034 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-04 00:01:37.038683 | orchestrator | 00:01:37.034 STDOUT terraform:  + volume_type = "ssd" 2025-09-04 00:01:37.038687 | orchestrator | 00:01:37.034 STDOUT terraform:  } 2025-09-04 00:01:37.038690 | orchestrator | 00:01:37.034 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2025-09-04 00:01:37.038694 | orchestrator | 00:01:37.034 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-04 00:01:37.038698 | orchestrator | 00:01:37.034 STDOUT terraform:  + attachment = (known after apply) 2025-09-04 00:01:37.038702 | orchestrator | 00:01:37.034 STDOUT terraform:  + availability_zone = "nova" 2025-09-04 
00:01:37.038706 | orchestrator | 00:01:37.034 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.038709 | orchestrator | 00:01:37.034 STDOUT terraform:  + image_id = (known after apply) 2025-09-04 00:01:37.038713 | orchestrator | 00:01:37.034 STDOUT terraform:  + metadata = (known after apply) 2025-09-04 00:01:37.038717 | orchestrator | 00:01:37.034 STDOUT terraform:  + name = "testbed-volume-0-node-base" 2025-09-04 00:01:37.038721 | orchestrator | 00:01:37.034 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.038725 | orchestrator | 00:01:37.034 STDOUT terraform:  + size = 80 2025-09-04 00:01:37.038729 | orchestrator | 00:01:37.034 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-04 00:01:37.038732 | orchestrator | 00:01:37.034 STDOUT terraform:  + volume_type = "ssd" 2025-09-04 00:01:37.038736 | orchestrator | 00:01:37.034 STDOUT terraform:  } 2025-09-04 00:01:37.038740 | orchestrator | 00:01:37.034 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2025-09-04 00:01:37.038744 | orchestrator | 00:01:37.034 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-04 00:01:37.038748 | orchestrator | 00:01:37.034 STDOUT terraform:  + attachment = (known after apply) 2025-09-04 00:01:37.038755 | orchestrator | 00:01:37.035 STDOUT terraform:  + availability_zone = "nova" 2025-09-04 00:01:37.038759 | orchestrator | 00:01:37.035 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.038763 | orchestrator | 00:01:37.035 STDOUT terraform:  + image_id = (known after apply) 2025-09-04 00:01:37.038766 | orchestrator | 00:01:37.035 STDOUT terraform:  + metadata = (known after apply) 2025-09-04 00:01:37.038770 | orchestrator | 00:01:37.035 STDOUT terraform:  + name = "testbed-volume-1-node-base" 2025-09-04 00:01:37.038774 | orchestrator | 00:01:37.035 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.038778 | orchestrator | 00:01:37.035 STDOUT terraform:  + size = 80 2025-09-04 00:01:37.038782 | orchestrator | 00:01:37.035 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-04 00:01:37.038786 | orchestrator | 00:01:37.035 STDOUT terraform:  + volume_type = "ssd" 2025-09-04 00:01:37.038789 | orchestrator | 00:01:37.035 STDOUT terraform:  } 2025-09-04 00:01:37.038793 | orchestrator | 00:01:37.035 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created 2025-09-04 00:01:37.038797 | orchestrator | 00:01:37.035 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-04 00:01:37.038801 | orchestrator | 00:01:37.035 STDOUT terraform:  + attachment = (known after apply) 2025-09-04 00:01:37.038807 | orchestrator | 00:01:37.035 STDOUT terraform:  + availability_zone = "nova" 2025-09-04 00:01:37.038811 | orchestrator | 00:01:37.035 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.038815 | orchestrator | 00:01:37.035 STDOUT terraform:  + image_id = (known after apply) 2025-09-04 00:01:37.038819 | orchestrator | 00:01:37.035 STDOUT terraform:  + metadata = (known after apply) 2025-09-04 00:01:37.038822 | orchestrator | 00:01:37.035 STDOUT terraform:  + name = "testbed-volume-2-node-base" 2025-09-04 00:01:37.038830 | orchestrator | 00:01:37.035 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.038834 | orchestrator | 00:01:37.035 STDOUT terraform:  + size = 80 2025-09-04 00:01:37.038838 | orchestrator | 00:01:37.035 STDOUT terraform:  + 
volume_retype_policy = "never" 2025-09-04 00:01:37.038842 | orchestrator | 00:01:37.035 STDOUT terraform:  + volume_type = "ssd" 2025-09-04 00:01:37.038846 | orchestrator | 00:01:37.035 STDOUT terraform:  } 2025-09-04 00:01:37.038849 | orchestrator | 00:01:37.035 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created 2025-09-04 00:01:37.038853 | orchestrator | 00:01:37.035 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-04 00:01:37.038857 | orchestrator | 00:01:37.035 STDOUT terraform:  + attachment = (known after apply) 2025-09-04 00:01:37.038861 | orchestrator | 00:01:37.035 STDOUT terraform:  + availability_zone = "nova" 2025-09-04 00:01:37.038865 | orchestrator | 00:01:37.035 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.038868 | orchestrator | 00:01:37.035 STDOUT terraform:  + image_id = (known after apply) 2025-09-04 00:01:37.038872 | orchestrator | 00:01:37.035 STDOUT terraform:  + metadata = (known after apply) 2025-09-04 00:01:37.038880 | orchestrator | 00:01:37.035 STDOUT terraform:  + name = "testbed-volume-3-node-base" 2025-09-04 00:01:37.038883 | orchestrator | 00:01:37.035 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.038887 | orchestrator | 00:01:37.035 STDOUT terraform:  + size = 80 2025-09-04 00:01:37.038891 | orchestrator | 00:01:37.035 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-04 00:01:37.038895 | orchestrator | 00:01:37.035 STDOUT terraform:  + volume_type = "ssd" 2025-09-04 00:01:37.038899 | orchestrator | 00:01:37.035 STDOUT terraform:  } 2025-09-04 00:01:37.038903 | orchestrator | 00:01:37.035 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created 2025-09-04 00:01:37.038909 | orchestrator | 00:01:37.035 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-04 00:01:37.038913 | orchestrator | 00:01:37.036 STDOUT terraform:  + attachment = (known after apply) 2025-09-04 00:01:37.038917 | orchestrator | 00:01:37.036 STDOUT terraform:  + availability_zone = "nova" 2025-09-04 00:01:37.038921 | orchestrator | 00:01:37.036 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.038925 | orchestrator | 00:01:37.036 STDOUT terraform:  + image_id = (known after apply) 2025-09-04 00:01:37.038928 | orchestrator | 00:01:37.036 STDOUT terraform:  + metadata = (known after apply) 2025-09-04 00:01:37.038932 | orchestrator | 00:01:37.036 STDOUT terraform:  + name = "testbed-volume-4-node-base" 2025-09-04 00:01:37.038936 | orchestrator | 00:01:37.036 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.038940 | orchestrator | 00:01:37.036 STDOUT terraform:  + size = 80 2025-09-04 00:01:37.038944 | orchestrator | 00:01:37.036 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-04 00:01:37.038947 | orchestrator | 00:01:37.036 STDOUT terraform:  + volume_type = "ssd" 2025-09-04 00:01:37.038951 | orchestrator | 00:01:37.036 STDOUT terraform:  } 2025-09-04 00:01:37.038955 | orchestrator | 00:01:37.036 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2025-09-04 00:01:37.038959 | orchestrator | 00:01:37.036 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-04 00:01:37.038963 | orchestrator | 00:01:37.036 STDOUT terraform:  + attachment = (known after apply) 2025-09-04 00:01:37.038967 | orchestrator | 00:01:37.036 STDOUT terraform:  + availability_zone = "nova" 
2025-09-04 00:01:37.038970 | orchestrator | 00:01:37.036 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.038974 | orchestrator | 00:01:37.036 STDOUT terraform:  + image_id = (known after apply) 2025-09-04 00:01:37.038981 | orchestrator | 00:01:37.036 STDOUT terraform:  + metadata = (known after apply) 2025-09-04 00:01:37.038985 | orchestrator | 00:01:37.036 STDOUT terraform:  + name = "testbed-volume-5-node-base" 2025-09-04 00:01:37.038989 | orchestrator | 00:01:37.036 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.038993 | orchestrator | 00:01:37.036 STDOUT terraform:  + size = 80 2025-09-04 00:01:37.039000 | orchestrator | 00:01:37.036 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-04 00:01:37.039004 | orchestrator | 00:01:37.036 STDOUT terraform:  + volume_type = "ssd" 2025-09-04 00:01:37.039008 | orchestrator | 00:01:37.036 STDOUT terraform:  } 2025-09-04 00:01:37.039012 | orchestrator | 00:01:37.036 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created 2025-09-04 00:01:37.039016 | orchestrator | 00:01:37.036 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-04 00:01:37.039022 | orchestrator | 00:01:37.036 STDOUT terraform:  + attachment = (known after apply) 2025-09-04 00:01:37.039026 | orchestrator | 00:01:37.036 STDOUT terraform:  + availability_zone = "nova" 2025-09-04 00:01:37.039030 | orchestrator | 00:01:37.036 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.039034 | orchestrator | 00:01:37.036 STDOUT terraform:  + metadata = (known after apply) 2025-09-04 00:01:37.039038 | orchestrator | 00:01:37.036 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-09-04 00:01:37.039041 | orchestrator | 00:01:37.036 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.039045 | orchestrator | 00:01:37.036 STDOUT terraform:  + size = 20 2025-09-04 00:01:37.039049 | orchestrator | 00:01:37.036 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-04 00:01:37.039053 | orchestrator | 00:01:37.036 STDOUT terraform:  + volume_type = "ssd" 2025-09-04 00:01:37.039057 | orchestrator | 00:01:37.036 STDOUT terraform:  } 2025-09-04 00:01:37.039061 | orchestrator | 00:01:37.036 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-09-04 00:01:37.039064 | orchestrator | 00:01:37.037 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-04 00:01:37.039068 | orchestrator | 00:01:37.037 STDOUT terraform:  + attachment = (known after apply) 2025-09-04 00:01:37.039072 | orchestrator | 00:01:37.037 STDOUT terraform:  + availability_zone = "nova" 2025-09-04 00:01:37.039076 | orchestrator | 00:01:37.037 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.039079 | orchestrator | 00:01:37.037 STDOUT terraform:  + metadata = (known after apply) 2025-09-04 00:01:37.039083 | orchestrator | 00:01:37.037 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-09-04 00:01:37.039087 | orchestrator | 00:01:37.037 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.039091 | orchestrator | 00:01:37.037 STDOUT terraform:  + size = 20 2025-09-04 00:01:37.039095 | orchestrator | 00:01:37.037 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-04 00:01:37.039099 | orchestrator | 00:01:37.037 STDOUT terraform:  + volume_type = "ssd" 2025-09-04 00:01:37.039102 | orchestrator | 00:01:37.037 STDOUT terraform:  } 2025-09-04 00:01:37.039106 | orchestrator 
| 00:01:37.037 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-09-04 00:01:37.039110 | orchestrator | 00:01:37.037 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-04 00:01:37.039114 | orchestrator | 00:01:37.037 STDOUT terraform:  + attachment = (known after apply) 2025-09-04 00:01:37.039121 | orchestrator | 00:01:37.037 STDOUT terraform:  + availability_zone = "nova" 2025-09-04 00:01:37.039125 | orchestrator | 00:01:37.037 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.039129 | orchestrator | 00:01:37.037 STDOUT terraform:  + metadata = (known after apply) 2025-09-04 00:01:37.039137 | orchestrator | 00:01:37.037 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-09-04 00:01:37.039141 | orchestrator | 00:01:37.037 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.039145 | orchestrator | 00:01:37.037 STDOUT terraform:  + size = 20 2025-09-04 00:01:37.039154 | orchestrator | 00:01:37.037 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-04 00:01:37.039158 | orchestrator | 00:01:37.037 STDOUT terraform:  + volume_type = "ssd" 2025-09-04 00:01:37.039161 | orchestrator | 00:01:37.037 STDOUT terraform:  } 2025-09-04 00:01:37.039165 | orchestrator | 00:01:37.038 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-09-04 00:01:37.039169 | orchestrator | 00:01:37.038 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-04 00:01:37.039173 | orchestrator | 00:01:37.038 STDOUT terraform:  + attachment = (known after apply) 2025-09-04 00:01:37.039177 | orchestrator | 00:01:37.038 STDOUT terraform:  + availability_zone = "nova" 2025-09-04 00:01:37.039181 | orchestrator | 00:01:37.038 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.039185 | orchestrator | 00:01:37.038 STDOUT terraform:  + metadata = (known after apply) 2025-09-04 00:01:37.039189 | orchestrator | 00:01:37.038 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-09-04 00:01:37.039192 | orchestrator | 00:01:37.038 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.039196 | orchestrator | 00:01:37.038 STDOUT terraform:  + size = 20 2025-09-04 00:01:37.039200 | orchestrator | 00:01:37.038 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-04 00:01:37.039204 | orchestrator | 00:01:37.038 STDOUT terraform:  + volume_type = "ssd" 2025-09-04 00:01:37.039208 | orchestrator | 00:01:37.038 STDOUT terraform:  } 2025-09-04 00:01:37.039212 | orchestrator | 00:01:37.038 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-09-04 00:01:37.039216 | orchestrator | 00:01:37.038 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-04 00:01:37.039220 | orchestrator | 00:01:37.038 STDOUT terraform:  + attachment = (known after apply) 2025-09-04 00:01:37.039224 | orchestrator | 00:01:37.038 STDOUT terraform:  + availability_zone = "nova" 2025-09-04 00:01:37.039227 | orchestrator | 00:01:37.038 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.039231 | orchestrator | 00:01:37.038 STDOUT terraform:  + metadata = (known after apply) 2025-09-04 00:01:37.039235 | orchestrator | 00:01:37.038 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-09-04 00:01:37.039239 | orchestrator | 00:01:37.038 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.039247 | orchestrator | 00:01:37.038 STDOUT 
terraform:  + size = 20 2025-09-04 00:01:37.039250 | orchestrator | 00:01:37.038 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-04 00:01:37.039254 | orchestrator | 00:01:37.038 STDOUT terraform:  + volume_type = "ssd" 2025-09-04 00:01:37.039258 | orchestrator | 00:01:37.038 STDOUT terraform:  } 2025-09-04 00:01:37.039262 | orchestrator | 00:01:37.038 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-09-04 00:01:37.039266 | orchestrator | 00:01:37.038 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-04 00:01:37.039270 | orchestrator | 00:01:37.038 STDOUT terraform:  + attachment = (known after apply) 2025-09-04 00:01:37.039274 | orchestrator | 00:01:37.038 STDOUT terraform:  + availability_zone = "nova" 2025-09-04 00:01:37.039277 | orchestrator | 00:01:37.038 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.039281 | orchestrator | 00:01:37.038 STDOUT terraform:  + metadata = (known after apply) 2025-09-04 00:01:37.039285 | orchestrator | 00:01:37.038 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-09-04 00:01:37.039289 | orchestrator | 00:01:37.038 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.039299 | orchestrator | 00:01:37.038 STDOUT terraform:  + size = 20 2025-09-04 00:01:37.039303 | orchestrator | 00:01:37.039 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-04 00:01:37.039307 | orchestrator | 00:01:37.039 STDOUT terraform:  + volume_type = "ssd" 2025-09-04 00:01:37.039311 | orchestrator | 00:01:37.039 STDOUT terraform:  } 2025-09-04 00:01:37.039315 | orchestrator | 00:01:37.039 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-09-04 00:01:37.039318 | orchestrator | 00:01:37.039 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-04 00:01:37.039322 | orchestrator | 00:01:37.039 STDOUT terraform:  + attachment = (known after apply) 2025-09-04 00:01:37.039326 | orchestrator | 00:01:37.039 STDOUT terraform:  + availability_zone = "nova" 2025-09-04 00:01:37.039330 | orchestrator | 00:01:37.039 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.039334 | orchestrator | 00:01:37.039 STDOUT terraform:  + metadata = (known after apply) 2025-09-04 00:01:37.050301 | orchestrator | 00:01:37.039 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-09-04 00:01:37.050450 | orchestrator | 00:01:37.046 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.050461 | orchestrator | 00:01:37.046 STDOUT terraform:  + size = 20 2025-09-04 00:01:37.050466 | orchestrator | 00:01:37.046 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-04 00:01:37.050472 | orchestrator | 00:01:37.046 STDOUT terraform:  + volume_type = "ssd" 2025-09-04 00:01:37.050477 | orchestrator | 00:01:37.046 STDOUT terraform:  } 2025-09-04 00:01:37.050482 | orchestrator | 00:01:37.046 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-09-04 00:01:37.050487 | orchestrator | 00:01:37.046 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-04 00:01:37.050501 | orchestrator | 00:01:37.046 STDOUT terraform:  + attachment = (known after apply) 2025-09-04 00:01:37.050506 | orchestrator | 00:01:37.046 STDOUT terraform:  + availability_zone = "nova" 2025-09-04 00:01:37.050510 | orchestrator | 00:01:37.046 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.050514 | orchestrator | 
00:01:37.046 STDOUT terraform:  + metadata = (known after apply) 2025-09-04 00:01:37.050517 | orchestrator | 00:01:37.046 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-09-04 00:01:37.050521 | orchestrator | 00:01:37.046 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.050525 | orchestrator | 00:01:37.046 STDOUT terraform:  + size = 20 2025-09-04 00:01:37.050529 | orchestrator | 00:01:37.046 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-04 00:01:37.050533 | orchestrator | 00:01:37.046 STDOUT terraform:  + volume_type = "ssd" 2025-09-04 00:01:37.050537 | orchestrator | 00:01:37.046 STDOUT terraform:  } 2025-09-04 00:01:37.050541 | orchestrator | 00:01:37.046 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-09-04 00:01:37.050545 | orchestrator | 00:01:37.046 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-04 00:01:37.050548 | orchestrator | 00:01:37.046 STDOUT terraform:  + attachment = (known after apply) 2025-09-04 00:01:37.050552 | orchestrator | 00:01:37.046 STDOUT terraform:  + availability_zone = "nova" 2025-09-04 00:01:37.050557 | orchestrator | 00:01:37.046 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.050561 | orchestrator | 00:01:37.046 STDOUT terraform:  + metadata = (known after apply) 2025-09-04 00:01:37.050565 | orchestrator | 00:01:37.046 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-09-04 00:01:37.050569 | orchestrator | 00:01:37.047 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.050573 | orchestrator | 00:01:37.047 STDOUT terraform:  + size = 20 2025-09-04 00:01:37.050577 | orchestrator | 00:01:37.047 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-04 00:01:37.050582 | orchestrator | 00:01:37.047 STDOUT terraform:  + volume_type = "ssd" 2025-09-04 00:01:37.050586 | orchestrator | 00:01:37.047 STDOUT terraform:  } 2025-09-04 00:01:37.050597 | orchestrator | 00:01:37.047 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-09-04 00:01:37.050602 | orchestrator | 00:01:37.047 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-09-04 00:01:37.050606 | orchestrator | 00:01:37.047 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-04 00:01:37.050611 | orchestrator | 00:01:37.047 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-04 00:01:37.050615 | orchestrator | 00:01:37.047 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-04 00:01:37.050620 | orchestrator | 00:01:37.047 STDOUT terraform:  + all_tags = (known after apply) 2025-09-04 00:01:37.050623 | orchestrator | 00:01:37.047 STDOUT terraform:  + availability_zone = "nova" 2025-09-04 00:01:37.050641 | orchestrator | 00:01:37.047 STDOUT terraform:  + config_drive = true 2025-09-04 00:01:37.050646 | orchestrator | 00:01:37.047 STDOUT terraform:  + created = (known after apply) 2025-09-04 00:01:37.050650 | orchestrator | 00:01:37.047 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-04 00:01:37.050653 | orchestrator | 00:01:37.047 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-09-04 00:01:37.050657 | orchestrator | 00:01:37.047 STDOUT terraform:  + force_delete = false 2025-09-04 00:01:37.050661 | orchestrator | 00:01:37.047 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-04 00:01:37.050665 | orchestrator | 00:01:37.047 STDOUT terraform:  + id = (known after apply) 2025-09-04 
00:01:37.050669 | orchestrator | 00:01:37.047 STDOUT terraform:  + image_id = (known after apply) 2025-09-04 00:01:37.050673 | orchestrator | 00:01:37.047 STDOUT terraform:  + image_name = (known after apply) 2025-09-04 00:01:37.050705 | orchestrator | 00:01:37.047 STDOUT terraform:  + key_pair = "testbed" 2025-09-04 00:01:37.050709 | orchestrator | 00:01:37.047 STDOUT terraform:  + name = "testbed-manager" 2025-09-04 00:01:37.050713 | orchestrator | 00:01:37.047 STDOUT terraform:  + power_state = "active" 2025-09-04 00:01:37.050717 | orchestrator | 00:01:37.047 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.050721 | orchestrator | 00:01:37.047 STDOUT terraform:  + security_groups = (known after apply) 2025-09-04 00:01:37.050725 | orchestrator | 00:01:37.047 STDOUT terraform:  + stop_before_destroy = false 2025-09-04 00:01:37.050732 | orchestrator | 00:01:37.047 STDOUT terraform:  + updated = (known after apply) 2025-09-04 00:01:37.050736 | orchestrator | 00:01:37.047 STDOUT terraform:  + user_data = (sensitive value) 2025-09-04 00:01:37.050740 | orchestrator | 00:01:37.047 STDOUT terraform:  + block_device { 2025-09-04 00:01:37.050744 | orchestrator | 00:01:37.047 STDOUT terraform:  + boot_index = 0 2025-09-04 00:01:37.050748 | orchestrator | 00:01:37.047 STDOUT terraform:  + delete_on_termination = false 2025-09-04 00:01:37.050752 | orchestrator | 00:01:37.047 STDOUT terraform:  + destination_type = "volume" 2025-09-04 00:01:37.050755 | orchestrator | 00:01:37.047 STDOUT terraform:  + multiattach = false 2025-09-04 00:01:37.050759 | orchestrator | 00:01:37.048 STDOUT terraform:  + source_type = "volume" 2025-09-04 00:01:37.050763 | orchestrator | 00:01:37.048 STDOUT terraform:  + uuid = (known after apply) 2025-09-04 00:01:37.050767 | orchestrator | 00:01:37.048 STDOUT terraform:  } 2025-09-04 00:01:37.050771 | orchestrator | 00:01:37.048 STDOUT terraform:  + network { 2025-09-04 00:01:37.050775 | orchestrator | 00:01:37.048 STDOUT terraform:  + access_network = false 2025-09-04 00:01:37.050779 | orchestrator | 00:01:37.048 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-04 00:01:37.050783 | orchestrator | 00:01:37.048 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-04 00:01:37.050787 | orchestrator | 00:01:37.048 STDOUT terraform:  + mac = (known after apply) 2025-09-04 00:01:37.050794 | orchestrator | 00:01:37.048 STDOUT terraform:  + name = (known after apply) 2025-09-04 00:01:37.050798 | orchestrator | 00:01:37.048 STDOUT terraform:  + port = (known after apply) 2025-09-04 00:01:37.050802 | orchestrator | 00:01:37.048 STDOUT terraform:  + uuid = (known after apply) 2025-09-04 00:01:37.050806 | orchestrator | 00:01:37.048 STDOUT terraform:  } 2025-09-04 00:01:37.050810 | orchestrator | 00:01:37.048 STDOUT terraform:  } 2025-09-04 00:01:37.050814 | orchestrator | 00:01:37.048 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-09-04 00:01:37.050818 | orchestrator | 00:01:37.048 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-04 00:01:37.050822 | orchestrator | 00:01:37.048 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-04 00:01:37.050833 | orchestrator | 00:01:37.048 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-04 00:01:37.050837 | orchestrator | 00:01:37.048 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-04 00:01:37.050841 | orchestrator | 00:01:37.048 STDOUT terraform:  + all_tags = (known after apply) 
2025-09-04 00:01:37.050845 | orchestrator | 00:01:37.048 STDOUT terraform:  + availability_zone = "nova" 2025-09-04 00:01:37.050849 | orchestrator | 00:01:37.048 STDOUT terraform:  + config_drive = true 2025-09-04 00:01:37.050853 | orchestrator | 00:01:37.048 STDOUT terraform:  + created = (known after apply) 2025-09-04 00:01:37.050857 | orchestrator | 00:01:37.048 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-04 00:01:37.050860 | orchestrator | 00:01:37.048 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-04 00:01:37.050864 | orchestrator | 00:01:37.048 STDOUT terraform:  + force_delete = false 2025-09-04 00:01:37.050868 | orchestrator | 00:01:37.048 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-04 00:01:37.050872 | orchestrator | 00:01:37.048 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.050876 | orchestrator | 00:01:37.048 STDOUT terraform:  + image_id = (known after apply) 2025-09-04 00:01:37.050879 | orchestrator | 00:01:37.048 STDOUT terraform:  + image_name = (known after apply) 2025-09-04 00:01:37.050883 | orchestrator | 00:01:37.048 STDOUT terraform:  + key_pair = "testbed" 2025-09-04 00:01:37.050887 | orchestrator | 00:01:37.048 STDOUT terraform:  + name = "testbed-node-0" 2025-09-04 00:01:37.050891 | orchestrator | 00:01:37.048 STDOUT terraform:  + power_state = "active" 2025-09-04 00:01:37.050895 | orchestrator | 00:01:37.048 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.050898 | orchestrator | 00:01:37.048 STDOUT terraform:  + security_groups = (known after apply) 2025-09-04 00:01:37.050902 | orchestrator | 00:01:37.048 STDOUT terraform:  + stop_before_destroy = false 2025-09-04 00:01:37.050906 | orchestrator | 00:01:37.049 STDOUT terraform:  + updated = (known after apply) 2025-09-04 00:01:37.050910 | orchestrator | 00:01:37.049 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-04 00:01:37.050914 | orchestrator | 00:01:37.049 STDOUT terraform:  + block_device { 2025-09-04 00:01:37.050921 | orchestrator | 00:01:37.049 STDOUT terraform:  + boot_index = 0 2025-09-04 00:01:37.050925 | orchestrator | 00:01:37.049 STDOUT terraform:  + delete_on_termination = false 2025-09-04 00:01:37.050928 | orchestrator | 00:01:37.049 STDOUT terraform:  + destination_type = "volume" 2025-09-04 00:01:37.050932 | orchestrator | 00:01:37.049 STDOUT terraform:  + multiattach = false 2025-09-04 00:01:37.050936 | orchestrator | 00:01:37.049 STDOUT terraform:  + source_type = "volume" 2025-09-04 00:01:37.050940 | orchestrator | 00:01:37.049 STDOUT terraform:  + uuid = (known after apply) 2025-09-04 00:01:37.050944 | orchestrator | 00:01:37.049 STDOUT terraform:  } 2025-09-04 00:01:37.050947 | orchestrator | 00:01:37.049 STDOUT terraform:  + network { 2025-09-04 00:01:37.050951 | orchestrator | 00:01:37.049 STDOUT terraform:  + access_network = false 2025-09-04 00:01:37.050955 | orchestrator | 00:01:37.049 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-04 00:01:37.050959 | orchestrator | 00:01:37.049 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-04 00:01:37.050963 | orchestrator | 00:01:37.049 STDOUT terraform:  + mac = (known after apply) 2025-09-04 00:01:37.050967 | orchestrator | 00:01:37.049 STDOUT terraform:  + name = (known after apply) 2025-09-04 00:01:37.050971 | orchestrator | 00:01:37.049 STDOUT terraform:  + port = (known after apply) 2025-09-04 00:01:37.050974 | orchestrator | 00:01:37.049 STDOUT terraform:  + uuid = (known after apply) 
2025-09-04 00:01:37.050978 | orchestrator | 00:01:37.049 STDOUT terraform:  } 2025-09-04 00:01:37.050982 | orchestrator | 00:01:37.049 STDOUT terraform:  } 2025-09-04 00:01:37.050989 | orchestrator | 00:01:37.049 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-09-04 00:01:37.050993 | orchestrator | 00:01:37.049 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-04 00:01:37.050997 | orchestrator | 00:01:37.049 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-04 00:01:37.051001 | orchestrator | 00:01:37.049 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-04 00:01:37.051005 | orchestrator | 00:01:37.049 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-04 00:01:37.051009 | orchestrator | 00:01:37.049 STDOUT terraform:  + all_tags = (known after apply) 2025-09-04 00:01:37.051013 | orchestrator | 00:01:37.049 STDOUT terraform:  + availability_zone = "nova" 2025-09-04 00:01:37.051016 | orchestrator | 00:01:37.049 STDOUT terraform:  + config_drive = true 2025-09-04 00:01:37.051020 | orchestrator | 00:01:37.049 STDOUT terraform:  + created = (known after apply) 2025-09-04 00:01:37.051024 | orchestrator | 00:01:37.049 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-04 00:01:37.051028 | orchestrator | 00:01:37.049 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-04 00:01:37.051032 | orchestrator | 00:01:37.049 STDOUT terraform:  + force_delete = false 2025-09-04 00:01:37.051038 | orchestrator | 00:01:37.049 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-04 00:01:37.051049 | orchestrator | 00:01:37.049 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.051053 | orchestrator | 00:01:37.050 STDOUT terraform:  + image_id = (known after apply) 2025-09-04 00:01:37.051057 | orchestrator | 00:01:37.050 STDOUT terraform:  + image_name = (known after apply) 2025-09-04 00:01:37.051061 | orchestrator | 00:01:37.050 STDOUT terraform:  + key_pair = "testbed" 2025-09-04 00:01:37.051065 | orchestrator | 00:01:37.050 STDOUT terraform:  + name = "testbed-node-1" 2025-09-04 00:01:37.051069 | orchestrator | 00:01:37.050 STDOUT terraform:  + power_state = "active" 2025-09-04 00:01:37.054167 | orchestrator | 00:01:37.050 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.058652 | orchestrator | 00:01:37.054 STDOUT terraform:  + security_groups = (known after apply) 2025-09-04 00:01:37.058689 | orchestrator | 00:01:37.058 STDOUT terraform:  + stop_before_destroy = false 2025-09-04 00:01:37.058710 | orchestrator | 00:01:37.058 STDOUT terraform:  + updated = (known after apply) 2025-09-04 00:01:37.058766 | orchestrator | 00:01:37.058 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-04 00:01:37.058775 | orchestrator | 00:01:37.058 STDOUT terraform:  + block_device { 2025-09-04 00:01:37.058808 | orchestrator | 00:01:37.058 STDOUT terraform:  + boot_index = 0 2025-09-04 00:01:37.058847 | orchestrator | 00:01:37.058 STDOUT terraform:  + delete_on_termination = false 2025-09-04 00:01:37.058868 | orchestrator | 00:01:37.058 STDOUT terraform:  + destination_type = "volume" 2025-09-04 00:01:37.058900 | orchestrator | 00:01:37.058 STDOUT terraform:  + multiattach = false 2025-09-04 00:01:37.058928 | orchestrator | 00:01:37.058 STDOUT terraform:  + source_type = "volume" 2025-09-04 00:01:37.058968 | orchestrator | 00:01:37.058 STDOUT terraform:  + uuid = (known after apply) 2025-09-04 00:01:37.058976 | 
orchestrator | 00:01:37.058 STDOUT terraform:  } 2025-09-04 00:01:37.058982 | orchestrator | 00:01:37.058 STDOUT terraform:  + network { 2025-09-04 00:01:37.059013 | orchestrator | 00:01:37.058 STDOUT terraform:  + access_network = false 2025-09-04 00:01:37.059043 | orchestrator | 00:01:37.059 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-04 00:01:37.059073 | orchestrator | 00:01:37.059 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-04 00:01:37.059105 | orchestrator | 00:01:37.059 STDOUT terraform:  + mac = (known after apply) 2025-09-04 00:01:37.059137 | orchestrator | 00:01:37.059 STDOUT terraform:  + name = (known after apply) 2025-09-04 00:01:37.059167 | orchestrator | 00:01:37.059 STDOUT terraform:  + port = (known after apply) 2025-09-04 00:01:37.059198 | orchestrator | 00:01:37.059 STDOUT terraform:  + uuid = (known after apply) 2025-09-04 00:01:37.059206 | orchestrator | 00:01:37.059 STDOUT terraform:  } 2025-09-04 00:01:37.059214 | orchestrator | 00:01:37.059 STDOUT terraform:  } 2025-09-04 00:01:37.059332 | orchestrator | 00:01:37.059 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-09-04 00:01:37.059388 | orchestrator | 00:01:37.059 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-04 00:01:37.059423 | orchestrator | 00:01:37.059 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-04 00:01:37.059459 | orchestrator | 00:01:37.059 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-04 00:01:37.059497 | orchestrator | 00:01:37.059 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-04 00:01:37.059532 | orchestrator | 00:01:37.059 STDOUT terraform:  + all_tags = (known after apply) 2025-09-04 00:01:37.059558 | orchestrator | 00:01:37.059 STDOUT terraform:  + availability_zone = "nova" 2025-09-04 00:01:37.059579 | orchestrator | 00:01:37.059 STDOUT terraform:  + config_drive = true 2025-09-04 00:01:37.059615 | orchestrator | 00:01:37.059 STDOUT terraform:  + created = (known after apply) 2025-09-04 00:01:37.059652 | orchestrator | 00:01:37.059 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-04 00:01:37.059682 | orchestrator | 00:01:37.059 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-04 00:01:37.059708 | orchestrator | 00:01:37.059 STDOUT terraform:  + force_delete = false 2025-09-04 00:01:37.059743 | orchestrator | 00:01:37.059 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-04 00:01:37.059779 | orchestrator | 00:01:37.059 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.059815 | orchestrator | 00:01:37.059 STDOUT terraform:  + image_id = (known after apply) 2025-09-04 00:01:37.059851 | orchestrator | 00:01:37.059 STDOUT terraform:  + image_name = (known after apply) 2025-09-04 00:01:37.059878 | orchestrator | 00:01:37.059 STDOUT terraform:  + key_pair = "testbed" 2025-09-04 00:01:37.059910 | orchestrator | 00:01:37.059 STDOUT terraform:  + name = "testbed-node-2" 2025-09-04 00:01:37.059937 | orchestrator | 00:01:37.059 STDOUT terraform:  + power_state = "active" 2025-09-04 00:01:37.059974 | orchestrator | 00:01:37.059 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.060007 | orchestrator | 00:01:37.059 STDOUT terraform:  + security_groups = (known after apply) 2025-09-04 00:01:37.060030 | orchestrator | 00:01:37.060 STDOUT terraform:  + stop_before_destroy = false 2025-09-04 00:01:37.060067 | orchestrator | 00:01:37.060 STDOUT terraform:  + updated = (known 
after apply) 2025-09-04 00:01:37.060131 | orchestrator | 00:01:37.060 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-04 00:01:37.060138 | orchestrator | 00:01:37.060 STDOUT terraform:  + block_device { 2025-09-04 00:01:37.060144 | orchestrator | 00:01:37.060 STDOUT terraform:  + boot_index = 0 2025-09-04 00:01:37.060195 | orchestrator | 00:01:37.060 STDOUT terraform:  + delete_on_termination = false 2025-09-04 00:01:37.060208 | orchestrator | 00:01:37.060 STDOUT terraform:  + destination_type = "volume" 2025-09-04 00:01:37.060233 | orchestrator | 00:01:37.060 STDOUT terraform:  + multiattach = false 2025-09-04 00:01:37.060261 | orchestrator | 00:01:37.060 STDOUT terraform:  + source_type = "volume" 2025-09-04 00:01:37.060300 | orchestrator | 00:01:37.060 STDOUT terraform:  + uuid = (known after apply) 2025-09-04 00:01:37.060315 | orchestrator | 00:01:37.060 STDOUT terraform:  } 2025-09-04 00:01:37.060319 | orchestrator | 00:01:37.060 STDOUT terraform:  + network { 2025-09-04 00:01:37.060357 | orchestrator | 00:01:37.060 STDOUT terraform:  + access_network = false 2025-09-04 00:01:37.060383 | orchestrator | 00:01:37.060 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-04 00:01:37.060414 | orchestrator | 00:01:37.060 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-04 00:01:37.060447 | orchestrator | 00:01:37.060 STDOUT terraform:  + mac = (known after apply) 2025-09-04 00:01:37.060477 | orchestrator | 00:01:37.060 STDOUT terraform:  + name = (known after apply) 2025-09-04 00:01:37.060512 | orchestrator | 00:01:37.060 STDOUT terraform:  + port = (known after apply) 2025-09-04 00:01:37.060542 | orchestrator | 00:01:37.060 STDOUT terraform:  + uuid = (known after apply) 2025-09-04 00:01:37.060549 | orchestrator | 00:01:37.060 STDOUT terraform:  } 2025-09-04 00:01:37.060558 | orchestrator | 00:01:37.060 STDOUT terraform:  } 2025-09-04 00:01:37.060605 | orchestrator | 00:01:37.060 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created 2025-09-04 00:01:37.060646 | orchestrator | 00:01:37.060 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-04 00:01:37.060679 | orchestrator | 00:01:37.060 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-04 00:01:37.060714 | orchestrator | 00:01:37.060 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-04 00:01:37.060750 | orchestrator | 00:01:37.060 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-04 00:01:37.060787 | orchestrator | 00:01:37.060 STDOUT terraform:  + all_tags = (known after apply) 2025-09-04 00:01:37.060814 | orchestrator | 00:01:37.060 STDOUT terraform:  + availability_zone = "nova" 2025-09-04 00:01:37.060834 | orchestrator | 00:01:37.060 STDOUT terraform:  + config_drive = true 2025-09-04 00:01:37.060872 | orchestrator | 00:01:37.060 STDOUT terraform:  + created = (known after apply) 2025-09-04 00:01:37.060908 | orchestrator | 00:01:37.060 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-04 00:01:37.060937 | orchestrator | 00:01:37.060 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-04 00:01:37.060962 | orchestrator | 00:01:37.060 STDOUT terraform:  + force_delete = false 2025-09-04 00:01:37.060997 | orchestrator | 00:01:37.060 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-04 00:01:37.061032 | orchestrator | 00:01:37.060 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.061068 | orchestrator | 00:01:37.061 STDOUT 
terraform:  + image_id = (known after apply) 2025-09-04 00:01:37.061102 | orchestrator | 00:01:37.061 STDOUT terraform:  + image_name = (known after apply) 2025-09-04 00:01:37.061127 | orchestrator | 00:01:37.061 STDOUT terraform:  + key_pair = "testbed" 2025-09-04 00:01:37.061158 | orchestrator | 00:01:37.061 STDOUT terraform:  + name = "testbed-node-3" 2025-09-04 00:01:37.061185 | orchestrator | 00:01:37.061 STDOUT terraform:  + power_state = "active" 2025-09-04 00:01:37.061221 | orchestrator | 00:01:37.061 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.061254 | orchestrator | 00:01:37.061 STDOUT terraform:  + security_groups = (known after apply) 2025-09-04 00:01:37.061280 | orchestrator | 00:01:37.061 STDOUT terraform:  + stop_before_destroy = false 2025-09-04 00:01:37.061314 | orchestrator | 00:01:37.061 STDOUT terraform:  + updated = (known after apply) 2025-09-04 00:01:37.061387 | orchestrator | 00:01:37.061 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-04 00:01:37.061395 | orchestrator | 00:01:37.061 STDOUT terraform:  + block_device { 2025-09-04 00:01:37.061401 | orchestrator | 00:01:37.061 STDOUT terraform:  + boot_index = 0 2025-09-04 00:01:37.061434 | orchestrator | 00:01:37.061 STDOUT terraform:  + delete_on_termination = false 2025-09-04 00:01:37.061464 | orchestrator | 00:01:37.061 STDOUT terraform:  + destination_type = "volume" 2025-09-04 00:01:37.061492 | orchestrator | 00:01:37.061 STDOUT terraform:  + multiattach = false 2025-09-04 00:01:37.061523 | orchestrator | 00:01:37.061 STDOUT terraform:  + source_type = "volume" 2025-09-04 00:01:37.061561 | orchestrator | 00:01:37.061 STDOUT terraform:  + uuid = (known after apply) 2025-09-04 00:01:37.061570 | orchestrator | 00:01:37.061 STDOUT terraform:  } 2025-09-04 00:01:37.061576 | orchestrator | 00:01:37.061 STDOUT terraform:  + network { 2025-09-04 00:01:37.061601 | orchestrator | 00:01:37.061 STDOUT terraform:  + access_network = false 2025-09-04 00:01:37.061633 | orchestrator | 00:01:37.061 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-04 00:01:37.061663 | orchestrator | 00:01:37.061 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-04 00:01:37.061693 | orchestrator | 00:01:37.061 STDOUT terraform:  + mac = (known after apply) 2025-09-04 00:01:37.061736 | orchestrator | 00:01:37.061 STDOUT terraform:  + name = (known after apply) 2025-09-04 00:01:37.061768 | orchestrator | 00:01:37.061 STDOUT terraform:  + port = (known after apply) 2025-09-04 00:01:37.061800 | orchestrator | 00:01:37.061 STDOUT terraform:  + uuid = (known after apply) 2025-09-04 00:01:37.061808 | orchestrator | 00:01:37.061 STDOUT terraform:  } 2025-09-04 00:01:37.061818 | orchestrator | 00:01:37.061 STDOUT terraform:  } 2025-09-04 00:01:37.061866 | orchestrator | 00:01:37.061 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created 2025-09-04 00:01:37.061907 | orchestrator | 00:01:37.061 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-04 00:01:37.061941 | orchestrator | 00:01:37.061 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-04 00:01:37.061977 | orchestrator | 00:01:37.061 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-04 00:01:37.062068 | orchestrator | 00:01:37.061 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-04 00:01:37.068601 | orchestrator | 00:01:37.062 STDOUT terraform:  + all_tags = (known after apply) 2025-09-04 00:01:37.068636 | 
orchestrator | 00:01:37.068 STDOUT terraform:  + availability_zone = "nova" 2025-09-04 00:01:37.068650 | orchestrator | 00:01:37.068 STDOUT terraform:  + config_drive = true 2025-09-04 00:01:37.068680 | orchestrator | 00:01:37.068 STDOUT terraform:  + created = (known after apply) 2025-09-04 00:01:37.068713 | orchestrator | 00:01:37.068 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-04 00:01:37.068749 | orchestrator | 00:01:37.068 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-04 00:01:37.068776 | orchestrator | 00:01:37.068 STDOUT terraform:  + force_delete = false 2025-09-04 00:01:37.068808 | orchestrator | 00:01:37.068 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-04 00:01:37.068846 | orchestrator | 00:01:37.068 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.068882 | orchestrator | 00:01:37.068 STDOUT terraform:  + image_id = (known after apply) 2025-09-04 00:01:37.068923 | orchestrator | 00:01:37.068 STDOUT terraform:  + image_name = (known after apply) 2025-09-04 00:01:37.068931 | orchestrator | 00:01:37.068 STDOUT terraform:  + key_pair = "testbed" 2025-09-04 00:01:37.068969 | orchestrator | 00:01:37.068 STDOUT terraform:  + name = "testbed-node-4" 2025-09-04 00:01:37.068999 | orchestrator | 00:01:37.068 STDOUT terraform:  + power_state = "active" 2025-09-04 00:01:37.069029 | orchestrator | 00:01:37.068 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.069064 | orchestrator | 00:01:37.069 STDOUT terraform:  + security_groups = (known after apply) 2025-09-04 00:01:37.069089 | orchestrator | 00:01:37.069 STDOUT terraform:  + stop_before_destroy = false 2025-09-04 00:01:37.069126 | orchestrator | 00:01:37.069 STDOUT terraform:  + updated = (known after apply) 2025-09-04 00:01:37.069191 | orchestrator | 00:01:37.069 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-04 00:01:37.069199 | orchestrator | 00:01:37.069 STDOUT terraform:  + block_device { 2025-09-04 00:01:37.069226 | orchestrator | 00:01:37.069 STDOUT terraform:  + boot_index = 0 2025-09-04 00:01:37.069257 | orchestrator | 00:01:37.069 STDOUT terraform:  + delete_on_termination = false 2025-09-04 00:01:37.069287 | orchestrator | 00:01:37.069 STDOUT terraform:  + destination_type = "volume" 2025-09-04 00:01:37.069313 | orchestrator | 00:01:37.069 STDOUT terraform:  + multiattach = false 2025-09-04 00:01:37.069385 | orchestrator | 00:01:37.069 STDOUT terraform:  + source_type = "volume" 2025-09-04 00:01:37.069410 | orchestrator | 00:01:37.069 STDOUT terraform:  + uuid = (known after apply) 2025-09-04 00:01:37.069419 | orchestrator | 00:01:37.069 STDOUT terraform:  } 2025-09-04 00:01:37.069426 | orchestrator | 00:01:37.069 STDOUT terraform:  + network { 2025-09-04 00:01:37.069452 | orchestrator | 00:01:37.069 STDOUT terraform:  + access_network = false 2025-09-04 00:01:37.069485 | orchestrator | 00:01:37.069 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-04 00:01:37.069519 | orchestrator | 00:01:37.069 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-04 00:01:37.069550 | orchestrator | 00:01:37.069 STDOUT terraform:  + mac = (known after apply) 2025-09-04 00:01:37.069581 | orchestrator | 00:01:37.069 STDOUT terraform:  + name = (known after apply) 2025-09-04 00:01:37.069613 | orchestrator | 00:01:37.069 STDOUT terraform:  + port = (known after apply) 2025-09-04 00:01:37.069646 | orchestrator | 00:01:37.069 STDOUT terraform:  + uuid = (known after apply) 2025-09-04 00:01:37.069654 | 
orchestrator | 00:01:37.069 STDOUT terraform:  } 2025-09-04 00:01:37.069674 | orchestrator | 00:01:37.069 STDOUT terraform:  } 2025-09-04 00:01:37.069720 | orchestrator | 00:01:37.069 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-09-04 00:01:37.069761 | orchestrator | 00:01:37.069 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-04 00:01:37.069795 | orchestrator | 00:01:37.069 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-04 00:01:37.069831 | orchestrator | 00:01:37.069 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-04 00:01:37.069866 | orchestrator | 00:01:37.069 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-04 00:01:37.069902 | orchestrator | 00:01:37.069 STDOUT terraform:  + all_tags = (known after apply) 2025-09-04 00:01:37.069928 | orchestrator | 00:01:37.069 STDOUT terraform:  + availability_zone = "nova" 2025-09-04 00:01:37.069952 | orchestrator | 00:01:37.069 STDOUT terraform:  + config_drive = true 2025-09-04 00:01:37.069988 | orchestrator | 00:01:37.069 STDOUT terraform:  + created = (known after apply) 2025-09-04 00:01:37.070062 | orchestrator | 00:01:37.069 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-04 00:01:37.070126 | orchestrator | 00:01:37.070 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-04 00:01:37.070152 | orchestrator | 00:01:37.070 STDOUT terraform:  + force_delete = false 2025-09-04 00:01:37.070217 | orchestrator | 00:01:37.070 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-04 00:01:37.070277 | orchestrator | 00:01:37.070 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.070283 | orchestrator | 00:01:37.070 STDOUT terraform:  + image_id = (known after apply) 2025-09-04 00:01:37.070320 | orchestrator | 00:01:37.070 STDOUT terraform:  + image_name = (known after apply) 2025-09-04 00:01:37.070359 | orchestrator | 00:01:37.070 STDOUT terraform:  + key_pair = "testbed" 2025-09-04 00:01:37.070389 | orchestrator | 00:01:37.070 STDOUT terraform:  + name = "testbed-node-5" 2025-09-04 00:01:37.070413 | orchestrator | 00:01:37.070 STDOUT terraform:  + power_state = "active" 2025-09-04 00:01:37.070449 | orchestrator | 00:01:37.070 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.070485 | orchestrator | 00:01:37.070 STDOUT terraform:  + security_groups = (known after apply) 2025-09-04 00:01:37.070509 | orchestrator | 00:01:37.070 STDOUT terraform:  + stop_before_destroy = false 2025-09-04 00:01:37.070544 | orchestrator | 00:01:37.070 STDOUT terraform:  + updated = (known after apply) 2025-09-04 00:01:37.070601 | orchestrator | 00:01:37.070 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-04 00:01:37.070622 | orchestrator | 00:01:37.070 STDOUT terraform:  + block_device { 2025-09-04 00:01:37.070649 | orchestrator | 00:01:37.070 STDOUT terraform:  + boot_index = 0 2025-09-04 00:01:37.070680 | orchestrator | 00:01:37.070 STDOUT terraform:  + delete_on_termination = false 2025-09-04 00:01:37.070710 | orchestrator | 00:01:37.070 STDOUT terraform:  + destination_type = "volume" 2025-09-04 00:01:37.070738 | orchestrator | 00:01:37.070 STDOUT terraform:  + multiattach = false 2025-09-04 00:01:37.070768 | orchestrator | 00:01:37.070 STDOUT terraform:  + source_type = "volume" 2025-09-04 00:01:37.070808 | orchestrator | 00:01:37.070 STDOUT terraform:  + uuid = (known after apply) 2025-09-04 00:01:37.070816 | orchestrator | 00:01:37.070 
STDOUT terraform:  } 2025-09-04 00:01:37.070822 | orchestrator | 00:01:37.070 STDOUT terraform:  + network { 2025-09-04 00:01:37.070849 | orchestrator | 00:01:37.070 STDOUT terraform:  + access_network = false 2025-09-04 00:01:37.070881 | orchestrator | 00:01:37.070 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-04 00:01:37.070909 | orchestrator | 00:01:37.070 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-04 00:01:37.070940 | orchestrator | 00:01:37.070 STDOUT terraform:  + mac = (known after apply) 2025-09-04 00:01:37.070975 | orchestrator | 00:01:37.070 STDOUT terraform:  + name = (known after apply) 2025-09-04 00:01:37.071011 | orchestrator | 00:01:37.070 STDOUT terraform:  + port = (known after apply) 2025-09-04 00:01:37.071042 | orchestrator | 00:01:37.070 STDOUT terraform:  + uuid = (known after apply) 2025-09-04 00:01:37.071049 | orchestrator | 00:01:37.071 STDOUT terraform:  } 2025-09-04 00:01:37.071054 | orchestrator | 00:01:37.071 STDOUT terraform:  } 2025-09-04 00:01:37.071095 | orchestrator | 00:01:37.071 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created 2025-09-04 00:01:37.071129 | orchestrator | 00:01:37.071 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" { 2025-09-04 00:01:37.071157 | orchestrator | 00:01:37.071 STDOUT terraform:  + fingerprint = (known after apply) 2025-09-04 00:01:37.071186 | orchestrator | 00:01:37.071 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.071208 | orchestrator | 00:01:37.071 STDOUT terraform:  + name = "testbed" 2025-09-04 00:01:37.071234 | orchestrator | 00:01:37.071 STDOUT terraform:  + private_key = (sensitive value) 2025-09-04 00:01:37.071265 | orchestrator | 00:01:37.071 STDOUT terraform:  + public_key = (known after apply) 2025-09-04 00:01:37.071292 | orchestrator | 00:01:37.071 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.071323 | orchestrator | 00:01:37.071 STDOUT terraform:  + user_id = (known after apply) 2025-09-04 00:01:37.071330 | orchestrator | 00:01:37.071 STDOUT terraform:  } 2025-09-04 00:01:37.071394 | orchestrator | 00:01:37.071 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2025-09-04 00:01:37.071440 | orchestrator | 00:01:37.071 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-04 00:01:37.071468 | orchestrator | 00:01:37.071 STDOUT terraform:  + device = (known after apply) 2025-09-04 00:01:37.071497 | orchestrator | 00:01:37.071 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.071525 | orchestrator | 00:01:37.071 STDOUT terraform:  + instance_id = (known after apply) 2025-09-04 00:01:37.071554 | orchestrator | 00:01:37.071 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.071583 | orchestrator | 00:01:37.071 STDOUT terraform:  + volume_id = (known after apply) 2025-09-04 00:01:37.071590 | orchestrator | 00:01:37.071 STDOUT terraform:  } 2025-09-04 00:01:37.071645 | orchestrator | 00:01:37.071 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2025-09-04 00:01:37.071694 | orchestrator | 00:01:37.071 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-04 00:01:37.071723 | orchestrator | 00:01:37.071 STDOUT terraform:  + device = (known after apply) 2025-09-04 00:01:37.071749 | orchestrator | 00:01:37.071 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.071777 | 
orchestrator | 00:01:37.071 STDOUT terraform:  + instance_id = (known after apply) 2025-09-04 00:01:37.071808 | orchestrator | 00:01:37.071 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.071836 | orchestrator | 00:01:37.071 STDOUT terraform:  + volume_id = (known after apply) 2025-09-04 00:01:37.071844 | orchestrator | 00:01:37.071 STDOUT terraform:  } 2025-09-04 00:01:37.071896 | orchestrator | 00:01:37.071 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2025-09-04 00:01:37.071944 | orchestrator | 00:01:37.071 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-04 00:01:37.071972 | orchestrator | 00:01:37.071 STDOUT terraform:  + device = (known after apply) 2025-09-04 00:01:37.071999 | orchestrator | 00:01:37.071 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.072029 | orchestrator | 00:01:37.071 STDOUT terraform:  + instance_id = (known after apply) 2025-09-04 00:01:37.072059 | orchestrator | 00:01:37.072 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.072089 | orchestrator | 00:01:37.072 STDOUT terraform:  + volume_id = (known after apply) 2025-09-04 00:01:37.072096 | orchestrator | 00:01:37.072 STDOUT terraform:  } 2025-09-04 00:01:37.072149 | orchestrator | 00:01:37.072 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2025-09-04 00:01:37.072196 | orchestrator | 00:01:37.072 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-04 00:01:37.072223 | orchestrator | 00:01:37.072 STDOUT terraform:  + device = (known after apply) 2025-09-04 00:01:37.072254 | orchestrator | 00:01:37.072 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.072281 | orchestrator | 00:01:37.072 STDOUT terraform:  + instance_id = (known after apply) 2025-09-04 00:01:37.072309 | orchestrator | 00:01:37.072 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.072338 | orchestrator | 00:01:37.072 STDOUT terraform:  + volume_id = (known after apply) 2025-09-04 00:01:37.072384 | orchestrator | 00:01:37.072 STDOUT terraform:  } 2025-09-04 00:01:37.072407 | orchestrator | 00:01:37.072 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2025-09-04 00:01:37.072455 | orchestrator | 00:01:37.072 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-04 00:01:37.072484 | orchestrator | 00:01:37.072 STDOUT terraform:  + device = (known after apply) 2025-09-04 00:01:37.072512 | orchestrator | 00:01:37.072 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.072541 | orchestrator | 00:01:37.072 STDOUT terraform:  + instance_id = (known after apply) 2025-09-04 00:01:37.072570 | orchestrator | 00:01:37.072 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.072598 | orchestrator | 00:01:37.072 STDOUT terraform:  + volume_id = (known after apply) 2025-09-04 00:01:37.072605 | orchestrator | 00:01:37.072 STDOUT terraform:  } 2025-09-04 00:01:37.072659 | orchestrator | 00:01:37.072 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created 2025-09-04 00:01:37.072706 | orchestrator | 00:01:37.072 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-04 00:01:37.072734 | orchestrator | 00:01:37.072 STDOUT terraform:  + device = (known after 
apply) 2025-09-04 00:01:37.072764 | orchestrator | 00:01:37.072 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.072793 | orchestrator | 00:01:37.072 STDOUT terraform:  + instance_id = (known after apply) 2025-09-04 00:01:37.072822 | orchestrator | 00:01:37.072 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.072851 | orchestrator | 00:01:37.072 STDOUT terraform:  + volume_id = (known after apply) 2025-09-04 00:01:37.072858 | orchestrator | 00:01:37.072 STDOUT terraform:  } 2025-09-04 00:01:37.072909 | orchestrator | 00:01:37.072 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2025-09-04 00:01:37.072956 | orchestrator | 00:01:37.072 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-04 00:01:37.072985 | orchestrator | 00:01:37.072 STDOUT terraform:  + device = (known after apply) 2025-09-04 00:01:37.073013 | orchestrator | 00:01:37.072 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.073042 | orchestrator | 00:01:37.073 STDOUT terraform:  + instance_id = (known after apply) 2025-09-04 00:01:37.073069 | orchestrator | 00:01:37.073 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.073098 | orchestrator | 00:01:37.073 STDOUT terraform:  + volume_id = (known after apply) 2025-09-04 00:01:37.073105 | orchestrator | 00:01:37.073 STDOUT terraform:  } 2025-09-04 00:01:37.073159 | orchestrator | 00:01:37.073 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2025-09-04 00:01:37.073209 | orchestrator | 00:01:37.073 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-04 00:01:37.073237 | orchestrator | 00:01:37.073 STDOUT terraform:  + device = (known after apply) 2025-09-04 00:01:37.073263 | orchestrator | 00:01:37.073 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.073291 | orchestrator | 00:01:37.073 STDOUT terraform:  + instance_id = (known after apply) 2025-09-04 00:01:37.073320 | orchestrator | 00:01:37.073 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.073369 | orchestrator | 00:01:37.073 STDOUT terraform:  + volume_id = (known after apply) 2025-09-04 00:01:37.073378 | orchestrator | 00:01:37.073 STDOUT terraform:  } 2025-09-04 00:01:37.073431 | orchestrator | 00:01:37.073 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2025-09-04 00:01:37.073481 | orchestrator | 00:01:37.073 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-04 00:01:37.073515 | orchestrator | 00:01:37.073 STDOUT terraform:  + device = (known after apply) 2025-09-04 00:01:37.073536 | orchestrator | 00:01:37.073 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.073566 | orchestrator | 00:01:37.073 STDOUT terraform:  + instance_id = (known after apply) 2025-09-04 00:01:37.073596 | orchestrator | 00:01:37.073 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.073625 | orchestrator | 00:01:37.073 STDOUT terraform:  + volume_id = (known after apply) 2025-09-04 00:01:37.073632 | orchestrator | 00:01:37.073 STDOUT terraform:  } 2025-09-04 00:01:37.073698 | orchestrator | 00:01:37.073 STDOUT terraform:  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2025-09-04 00:01:37.073749 | orchestrator | 00:01:37.073 STDOUT terraform:  + resource 
"openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-09-04 00:01:37.073778 | orchestrator | 00:01:37.073 STDOUT terraform:  + fixed_ip = (known after apply) 2025-09-04 00:01:37.073806 | orchestrator | 00:01:37.073 STDOUT terraform:  + floating_ip = (known after apply) 2025-09-04 00:01:37.073836 | orchestrator | 00:01:37.073 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.073870 | orchestrator | 00:01:37.073 STDOUT terraform:  + port_id = (known after apply) 2025-09-04 00:01:37.073896 | orchestrator | 00:01:37.073 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.073903 | orchestrator | 00:01:37.073 STDOUT terraform:  } 2025-09-04 00:01:37.073955 | orchestrator | 00:01:37.073 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-09-04 00:01:37.074002 | orchestrator | 00:01:37.073 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-09-04 00:01:37.090389 | orchestrator | 00:01:37.073 STDOUT terraform:  + address = (known after apply) 2025-09-04 00:01:37.090434 | orchestrator | 00:01:37.078 STDOUT terraform:  + all_tags = (known after apply) 2025-09-04 00:01:37.090441 | orchestrator | 00:01:37.078 STDOUT terraform:  + dns_domain = (known after apply) 2025-09-04 00:01:37.090447 | orchestrator | 00:01:37.078 STDOUT terraform:  + dns_name = (known after apply) 2025-09-04 00:01:37.090452 | orchestrator | 00:01:37.078 STDOUT terraform:  + fixed_ip = (known after apply) 2025-09-04 00:01:37.090457 | orchestrator | 00:01:37.078 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.090463 | orchestrator | 00:01:37.078 STDOUT terraform:  + pool = "public" 2025-09-04 00:01:37.090468 | orchestrator | 00:01:37.078 STDOUT terraform:  + port_id = (known after apply) 2025-09-04 00:01:37.090472 | orchestrator | 00:01:37.078 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.090477 | orchestrator | 00:01:37.078 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-04 00:01:37.090492 | orchestrator | 00:01:37.078 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-04 00:01:37.090496 | orchestrator | 00:01:37.078 STDOUT terraform:  } 2025-09-04 00:01:37.090501 | orchestrator | 00:01:37.078 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-09-04 00:01:37.090507 | orchestrator | 00:01:37.078 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-09-04 00:01:37.090512 | orchestrator | 00:01:37.078 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-04 00:01:37.090517 | orchestrator | 00:01:37.078 STDOUT terraform:  + all_tags = (known after apply) 2025-09-04 00:01:37.090522 | orchestrator | 00:01:37.078 STDOUT terraform:  + availability_zone_hints = [ 2025-09-04 00:01:37.090528 | orchestrator | 00:01:37.079 STDOUT terraform:  + "nova", 2025-09-04 00:01:37.090533 | orchestrator | 00:01:37.079 STDOUT terraform:  ] 2025-09-04 00:01:37.090537 | orchestrator | 00:01:37.079 STDOUT terraform:  + dns_domain = (known after apply) 2025-09-04 00:01:37.090543 | orchestrator | 00:01:37.079 STDOUT terraform:  + external = (known after apply) 2025-09-04 00:01:37.090547 | orchestrator | 00:01:37.079 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.090552 | orchestrator | 00:01:37.079 STDOUT terraform:  + mtu = (known after apply) 2025-09-04 00:01:37.090557 | orchestrator | 00:01:37.079 STDOUT terraform:  + name = 
"net-testbed-management" 2025-09-04 00:01:37.090562 | orchestrator | 00:01:37.079 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-04 00:01:37.090567 | orchestrator | 00:01:37.079 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-04 00:01:37.090572 | orchestrator | 00:01:37.079 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.090581 | orchestrator | 00:01:37.079 STDOUT terraform:  + shared = (known after apply) 2025-09-04 00:01:37.090585 | orchestrator | 00:01:37.079 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-04 00:01:37.090589 | orchestrator | 00:01:37.079 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-09-04 00:01:37.090592 | orchestrator | 00:01:37.079 STDOUT terraform:  + segments (known after apply) 2025-09-04 00:01:37.090596 | orchestrator | 00:01:37.079 STDOUT terraform:  } 2025-09-04 00:01:37.090601 | orchestrator | 00:01:37.079 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-09-04 00:01:37.090605 | orchestrator | 00:01:37.079 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-09-04 00:01:37.090608 | orchestrator | 00:01:37.079 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-04 00:01:37.090613 | orchestrator | 00:01:37.079 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-04 00:01:37.090616 | orchestrator | 00:01:37.079 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-04 00:01:37.090629 | orchestrator | 00:01:37.079 STDOUT terraform:  + all_tags = (known after apply) 2025-09-04 00:01:37.090633 | orchestrator | 00:01:37.079 STDOUT terraform:  + device_id = (known after apply) 2025-09-04 00:01:37.090641 | orchestrator | 00:01:37.079 STDOUT terraform:  + device_owner = (known after apply) 2025-09-04 00:01:37.090645 | orchestrator | 00:01:37.079 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-04 00:01:37.090648 | orchestrator | 00:01:37.079 STDOUT terraform:  + dns_name = (known after apply) 2025-09-04 00:01:37.090652 | orchestrator | 00:01:37.079 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.090656 | orchestrator | 00:01:37.079 STDOUT terraform:  + mac_address = (known after apply) 2025-09-04 00:01:37.090660 | orchestrator | 00:01:37.079 STDOUT terraform:  + network_id = (known after apply) 2025-09-04 00:01:37.090664 | orchestrator | 00:01:37.079 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-04 00:01:37.090667 | orchestrator | 00:01:37.079 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-04 00:01:37.090671 | orchestrator | 00:01:37.079 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.090675 | orchestrator | 00:01:37.079 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-04 00:01:37.090679 | orchestrator | 00:01:37.079 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-04 00:01:37.090683 | orchestrator | 00:01:37.079 STDOUT terraform:  + allowed_address_pairs { 2025-09-04 00:01:37.090687 | orchestrator | 00:01:37.080 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-04 00:01:37.090690 | orchestrator | 00:01:37.080 STDOUT terraform:  } 2025-09-04 00:01:37.090694 | orchestrator | 00:01:37.080 STDOUT terraform:  + allowed_address_pairs { 2025-09-04 00:01:37.090698 | orchestrator | 00:01:37.080 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-04 00:01:37.090702 | orchestrator | 00:01:37.080 STDOUT 
terraform:  } 2025-09-04 00:01:37.090706 | orchestrator | 00:01:37.080 STDOUT terraform:  + binding (known after apply) 2025-09-04 00:01:37.090710 | orchestrator | 00:01:37.080 STDOUT terraform:  + fixed_ip { 2025-09-04 00:01:37.090714 | orchestrator | 00:01:37.080 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-09-04 00:01:37.090718 | orchestrator | 00:01:37.080 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-04 00:01:37.090721 | orchestrator | 00:01:37.080 STDOUT terraform:  } 2025-09-04 00:01:37.090725 | orchestrator | 00:01:37.080 STDOUT terraform:  } 2025-09-04 00:01:37.090729 | orchestrator | 00:01:37.080 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will be created 2025-09-04 00:01:37.090733 | orchestrator | 00:01:37.080 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-04 00:01:37.090737 | orchestrator | 00:01:37.080 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-04 00:01:37.090741 | orchestrator | 00:01:37.080 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-04 00:01:37.090745 | orchestrator | 00:01:37.080 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-04 00:01:37.090749 | orchestrator | 00:01:37.080 STDOUT terraform:  + all_tags = (known after apply) 2025-09-04 00:01:37.090753 | orchestrator | 00:01:37.080 STDOUT terraform:  + device_id = (known after apply) 2025-09-04 00:01:37.090760 | orchestrator | 00:01:37.080 STDOUT terraform:  + device_owner = (known after apply) 2025-09-04 00:01:37.090764 | orchestrator | 00:01:37.080 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-04 00:01:37.090767 | orchestrator | 00:01:37.080 STDOUT terraform:  + dns_name = (known after apply) 2025-09-04 00:01:37.090771 | orchestrator | 00:01:37.080 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.090778 | orchestrator | 00:01:37.080 STDOUT terraform:  + mac_address = (known after apply) 2025-09-04 00:01:37.090788 | orchestrator | 00:01:37.080 STDOUT terraform:  + network_id = (known after apply) 2025-09-04 00:01:37.090792 | orchestrator | 00:01:37.080 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-04 00:01:37.090796 | orchestrator | 00:01:37.080 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-04 00:01:37.090800 | orchestrator | 00:01:37.080 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.090804 | orchestrator | 00:01:37.080 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-04 00:01:37.090807 | orchestrator | 00:01:37.080 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-04 00:01:37.090811 | orchestrator | 00:01:37.080 STDOUT terraform:  + allowed_address_pairs { 2025-09-04 00:01:37.090815 | orchestrator | 00:01:37.080 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-04 00:01:37.090819 | orchestrator | 00:01:37.080 STDOUT terraform:  } 2025-09-04 00:01:37.090823 | orchestrator | 00:01:37.080 STDOUT terraform:  + allowed_address_pairs { 2025-09-04 00:01:37.090827 | orchestrator | 00:01:37.080 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-04 00:01:37.090830 | orchestrator | 00:01:37.080 STDOUT terraform:  } 2025-09-04 00:01:37.090834 | orchestrator | 00:01:37.080 STDOUT terraform:  + allowed_address_pairs { 2025-09-04 00:01:37.090838 | orchestrator | 00:01:37.080 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-04 00:01:37.090842 | orchestrator | 00:01:37.080 STDOUT terraform:  } 
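The manager management port planned just above combines a static fixed IP with allowed address pairs. A sketch of how such a port can be declared, using only values visible in the plan; the subnet and security-group resource names are assumed placeholders, since the plan shows those IDs only as (known after apply):

# Sketch only: the manager management port with its allowed address pairs and fixed IP.
resource "openstack_networking_port_v2" "manager_port_management" {
  network_id = openstack_networking_network_v2.net_management.id

  # assumed resource name; the plan resolves this ID only at apply time
  security_group_ids = [openstack_networking_secgroup_v2.security_group_management.id]

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id   # assumed subnet resource
    ip_address = "192.168.16.5"
  }

  # Additional prefixes/addresses this port is allowed to carry (e.g. VIPs, routed ranges).
  allowed_address_pairs {
    ip_address = "192.168.112.0/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.8/20"
  }
}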
2025-09-04 00:01:37.090846 | orchestrator | 00:01:37.080 STDOUT terraform:  + allowed_address_pairs { 2025-09-04 00:01:37.090849 | orchestrator | 00:01:37.080 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-04 00:01:37.090853 | orchestrator | 00:01:37.080 STDOUT terraform:  } 2025-09-04 00:01:37.090857 | orchestrator | 00:01:37.080 STDOUT terraform:  + binding (known after apply) 2025-09-04 00:01:37.090861 | orchestrator | 00:01:37.080 STDOUT terraform:  + fixed_ip { 2025-09-04 00:01:37.090865 | orchestrator | 00:01:37.080 STDOUT terraform:  + ip_address = "192.168.16.10" 2025-09-04 00:01:37.090869 | orchestrator | 00:01:37.081 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-04 00:01:37.090873 | orchestrator | 00:01:37.081 STDOUT terraform:  } 2025-09-04 00:01:37.090876 | orchestrator | 00:01:37.081 STDOUT terraform:  } 2025-09-04 00:01:37.090880 | orchestrator | 00:01:37.081 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[1] will be created 2025-09-04 00:01:37.090884 | orchestrator | 00:01:37.081 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-04 00:01:37.090891 | orchestrator | 00:01:37.081 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-04 00:01:37.090895 | orchestrator | 00:01:37.081 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-04 00:01:37.090899 | orchestrator | 00:01:37.081 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-04 00:01:37.090905 | orchestrator | 00:01:37.081 STDOUT terraform:  + all_tags = (known after apply) 2025-09-04 00:01:37.090909 | orchestrator | 00:01:37.081 STDOUT terraform:  + device_id = (known after apply) 2025-09-04 00:01:37.090913 | orchestrator | 00:01:37.081 STDOUT terraform:  + device_owner = (known after apply) 2025-09-04 00:01:37.090916 | orchestrator | 00:01:37.081 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-04 00:01:37.090920 | orchestrator | 00:01:37.081 STDOUT terraform:  + dns_name = (known after apply) 2025-09-04 00:01:37.090924 | orchestrator | 00:01:37.081 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.090928 | orchestrator | 00:01:37.081 STDOUT terraform:  + mac_address = (known after apply) 2025-09-04 00:01:37.090932 | orchestrator | 00:01:37.081 STDOUT terraform:  + network_id = (known after apply) 2025-09-04 00:01:37.090935 | orchestrator | 00:01:37.081 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-04 00:01:37.090939 | orchestrator | 00:01:37.081 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-04 00:01:37.090946 | orchestrator | 00:01:37.081 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.090950 | orchestrator | 00:01:37.081 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-04 00:01:37.090954 | orchestrator | 00:01:37.081 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-04 00:01:37.090958 | orchestrator | 00:01:37.081 STDOUT terraform:  + allowed_address_pairs { 2025-09-04 00:01:37.090962 | orchestrator | 00:01:37.081 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-04 00:01:37.090966 | orchestrator | 00:01:37.081 STDOUT terraform:  } 2025-09-04 00:01:37.090970 | orchestrator | 00:01:37.081 STDOUT terraform:  + allowed_address_pairs { 2025-09-04 00:01:37.090974 | orchestrator | 00:01:37.081 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-04 00:01:37.090978 | orchestrator | 00:01:37.081 STDOUT terraform:  } 2025-09-04 
00:01:37.090981 | orchestrator | 00:01:37.081 STDOUT terraform:  + allowed_address_pairs { 2025-09-04 00:01:37.090985 | orchestrator | 00:01:37.081 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-04 00:01:37.090989 | orchestrator | 00:01:37.081 STDOUT terraform:  } 2025-09-04 00:01:37.090993 | orchestrator | 00:01:37.081 STDOUT terraform:  + allowed_address_pairs { 2025-09-04 00:01:37.090997 | orchestrator | 00:01:37.081 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-04 00:01:37.091001 | orchestrator | 00:01:37.081 STDOUT terraform:  } 2025-09-04 00:01:37.091005 | orchestrator | 00:01:37.081 STDOUT terraform:  + binding (known after apply) 2025-09-04 00:01:37.091009 | orchestrator | 00:01:37.081 STDOUT terraform:  + fixed_ip { 2025-09-04 00:01:37.091016 | orchestrator | 00:01:37.081 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-09-04 00:01:37.091020 | orchestrator | 00:01:37.081 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-04 00:01:37.091024 | orchestrator | 00:01:37.081 STDOUT terraform:  } 2025-09-04 00:01:37.091027 | orchestrator | 00:01:37.081 STDOUT terraform:  } 2025-09-04 00:01:37.091031 | orchestrator | 00:01:37.081 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-09-04 00:01:37.091035 | orchestrator | 00:01:37.089 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-04 00:01:37.091039 | orchestrator | 00:01:37.089 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-04 00:01:37.091043 | orchestrator | 00:01:37.089 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-04 00:01:37.091047 | orchestrator | 00:01:37.089 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-04 00:01:37.091051 | orchestrator | 00:01:37.089 STDOUT terraform:  + all_tags = (known after apply) 2025-09-04 00:01:37.091055 | orchestrator | 00:01:37.089 STDOUT terraform:  + device_id = (known after apply) 2025-09-04 00:01:37.091058 | orchestrator | 00:01:37.089 STDOUT terraform:  + device_owner = (known after apply) 2025-09-04 00:01:37.091062 | orchestrator | 00:01:37.089 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-04 00:01:37.091066 | orchestrator | 00:01:37.089 STDOUT terraform:  + dns_name = (known after apply) 2025-09-04 00:01:37.091070 | orchestrator | 00:01:37.089 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.091074 | orchestrator | 00:01:37.089 STDOUT terraform:  + mac_address = (known after apply) 2025-09-04 00:01:37.091077 | orchestrator | 00:01:37.089 STDOUT terraform:  + network_id = (known after apply) 2025-09-04 00:01:37.091081 | orchestrator | 00:01:37.089 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-04 00:01:37.091085 | orchestrator | 00:01:37.089 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-04 00:01:37.091089 | orchestrator | 00:01:37.089 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.091093 | orchestrator | 00:01:37.089 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-04 00:01:37.091101 | orchestrator | 00:01:37.089 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-04 00:01:37.091105 | orchestrator | 00:01:37.089 STDOUT terraform:  + allowed_address_pairs { 2025-09-04 00:01:37.091109 | orchestrator | 00:01:37.089 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-04 00:01:37.091112 | orchestrator | 00:01:37.089 STDOUT terraform:  } 2025-09-04 00:01:37.091116 | 
orchestrator | 00:01:37.089 STDOUT terraform:  + allowed_address_pairs { 2025-09-04 00:01:37.091120 | orchestrator | 00:01:37.089 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-04 00:01:37.091124 | orchestrator | 00:01:37.090 STDOUT terraform:  } 2025-09-04 00:01:37.091128 | orchestrator | 00:01:37.090 STDOUT terraform:  + allowed_address_pairs { 2025-09-04 00:01:37.091131 | orchestrator | 00:01:37.090 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-04 00:01:37.091142 | orchestrator | 00:01:37.090 STDOUT terraform:  } 2025-09-04 00:01:37.091146 | orchestrator | 00:01:37.090 STDOUT terraform:  + allowed_address_pairs { 2025-09-04 00:01:37.091150 | orchestrator | 00:01:37.090 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-04 00:01:37.091154 | orchestrator | 00:01:37.090 STDOUT terraform:  } 2025-09-04 00:01:37.091158 | orchestrator | 00:01:37.090 STDOUT terraform:  + binding (known after apply) 2025-09-04 00:01:37.091162 | orchestrator | 00:01:37.090 STDOUT terraform:  + fixed_ip { 2025-09-04 00:01:37.091165 | orchestrator | 00:01:37.090 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-09-04 00:01:37.091169 | orchestrator | 00:01:37.090 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-04 00:01:37.091173 | orchestrator | 00:01:37.090 STDOUT terraform:  } 2025-09-04 00:01:37.091177 | orchestrator | 00:01:37.090 STDOUT terraform:  } 2025-09-04 00:01:37.091181 | orchestrator | 00:01:37.090 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-09-04 00:01:37.091185 | orchestrator | 00:01:37.090 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-04 00:01:37.091189 | orchestrator | 00:01:37.090 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-04 00:01:37.091192 | orchestrator | 00:01:37.090 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-04 00:01:37.091196 | orchestrator | 00:01:37.090 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-04 00:01:37.091200 | orchestrator | 00:01:37.090 STDOUT terraform:  + all_tags = (known after apply) 2025-09-04 00:01:37.091204 | orchestrator | 00:01:37.090 STDOUT terraform:  + device_id = (known after apply) 2025-09-04 00:01:37.091208 | orchestrator | 00:01:37.090 STDOUT terraform:  + device_owner = (known after apply) 2025-09-04 00:01:37.091214 | orchestrator | 00:01:37.090 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-04 00:01:37.091218 | orchestrator | 00:01:37.090 STDOUT terraform:  + dns_name = (known after apply) 2025-09-04 00:01:37.091224 | orchestrator | 00:01:37.090 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.091228 | orchestrator | 00:01:37.090 STDOUT terraform:  + mac_address = (known after apply) 2025-09-04 00:01:37.091232 | orchestrator | 00:01:37.090 STDOUT terraform:  + network_id = (known after apply) 2025-09-04 00:01:37.091236 | orchestrator | 00:01:37.090 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-04 00:01:37.091240 | orchestrator | 00:01:37.090 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-04 00:01:37.091243 | orchestrator | 00:01:37.090 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.091247 | orchestrator | 00:01:37.090 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-04 00:01:37.091251 | orchestrator | 00:01:37.090 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-04 00:01:37.091255 | orchestrator | 
00:01:37.090 STDOUT terraform:  + allowed_address_pairs { 2025-09-04 00:01:37.091258 | orchestrator | 00:01:37.090 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-04 00:01:37.091271 | orchestrator | 00:01:37.090 STDOUT terraform:  } 2025-09-04 00:01:37.091275 | orchestrator | 00:01:37.090 STDOUT terraform:  + allowed_address_pairs { 2025-09-04 00:01:37.091279 | orchestrator | 00:01:37.090 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-04 00:01:37.091282 | orchestrator | 00:01:37.090 STDOUT terraform:  } 2025-09-04 00:01:37.091286 | orchestrator | 00:01:37.090 STDOUT terraform:  + allowed_address_pairs { 2025-09-04 00:01:37.091290 | orchestrator | 00:01:37.090 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-04 00:01:37.091294 | orchestrator | 00:01:37.090 STDOUT terraform:  } 2025-09-04 00:01:37.091298 | orchestrator | 00:01:37.091 STDOUT terraform:  + allowed_address_pairs { 2025-09-04 00:01:37.091302 | orchestrator | 00:01:37.091 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-04 00:01:37.091305 | orchestrator | 00:01:37.091 STDOUT terraform:  } 2025-09-04 00:01:37.091309 | orchestrator | 00:01:37.091 STDOUT terraform:  + binding (known after apply) 2025-09-04 00:01:37.091313 | orchestrator | 00:01:37.091 STDOUT terraform:  + fixed_ip { 2025-09-04 00:01:37.091317 | orchestrator | 00:01:37.091 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-09-04 00:01:37.091321 | orchestrator | 00:01:37.091 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-04 00:01:37.091325 | orchestrator | 00:01:37.091 STDOUT terraform:  } 2025-09-04 00:01:37.091328 | orchestrator | 00:01:37.091 STDOUT terraform:  } 2025-09-04 00:01:37.091332 | orchestrator | 00:01:37.091 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-09-04 00:01:37.091336 | orchestrator | 00:01:37.091 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-04 00:01:37.091352 | orchestrator | 00:01:37.091 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-04 00:01:37.091359 | orchestrator | 00:01:37.091 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-04 00:01:37.091363 | orchestrator | 00:01:37.091 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-04 00:01:37.091367 | orchestrator | 00:01:37.091 STDOUT terraform:  + all_tags = (known after apply) 2025-09-04 00:01:37.092060 | orchestrator | 00:01:37.091 STDOUT terraform:  + device_id = (known after apply) 2025-09-04 00:01:37.092072 | orchestrator | 00:01:37.091 STDOUT terraform:  + device_owner = (known after apply) 2025-09-04 00:01:37.092076 | orchestrator | 00:01:37.091 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-04 00:01:37.092080 | orchestrator | 00:01:37.091 STDOUT terraform:  + dns_name = (known after apply) 2025-09-04 00:01:37.092084 | orchestrator | 00:01:37.091 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.092088 | orchestrator | 00:01:37.091 STDOUT terraform:  + mac_address = (known after apply) 2025-09-04 00:01:37.092092 | orchestrator | 00:01:37.091 STDOUT terraform:  + network_id = (known after apply) 2025-09-04 00:01:37.092099 | orchestrator | 00:01:37.091 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-04 00:01:37.092108 | orchestrator | 00:01:37.091 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-04 00:01:37.092112 | orchestrator | 00:01:37.091 STDOUT terraform:  + region = (known after apply) 
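The per-node ports and volume attachments in this plan repeat one pattern with an incrementing index (fixed IPs 192.168.16.10 upwards, attachments indexed 0 through 8). A compact count-based sketch of that pattern follows; the count values, the address arithmetic, the subnet/volume resource names and the attachment-to-instance mapping are assumptions for illustration:

# Sketch only: one management port per node, fixed IPs 192.168.16.10 and up.
resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6                                                        # assumed
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id    # assumed subnet resource
    ip_address = "192.168.16.${10 + count.index}"
  }

  allowed_address_pairs {
    ip_address = "192.168.112.0/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.254/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.8/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.9/20"
  }
}

# Sketch only: the nine planned volume attachments, indices 0..8.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id    # assumed mapping of volumes to nodes
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id     # assumed volume resource
}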
2025-09-04 00:01:37.092116 | orchestrator | 00:01:37.091 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-04 00:01:37.092120 | orchestrator | 00:01:37.091 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-04 00:01:37.092124 | orchestrator | 00:01:37.091 STDOUT terraform:  + allowed_address_pairs { 2025-09-04 00:01:37.092128 | orchestrator | 00:01:37.091 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-04 00:01:37.092132 | orchestrator | 00:01:37.091 STDOUT terraform:  } 2025-09-04 00:01:37.092136 | orchestrator | 00:01:37.091 STDOUT terraform:  + allowed_address_pairs { 2025-09-04 00:01:37.092140 | orchestrator | 00:01:37.091 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-04 00:01:37.092144 | orchestrator | 00:01:37.091 STDOUT terraform:  } 2025-09-04 00:01:37.092147 | orchestrator | 00:01:37.091 STDOUT terraform:  + allowed_address_pairs { 2025-09-04 00:01:37.092151 | orchestrator | 00:01:37.091 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-04 00:01:37.092155 | orchestrator | 00:01:37.091 STDOUT terraform:  } 2025-09-04 00:01:37.092159 | orchestrator | 00:01:37.091 STDOUT terraform:  + allowed_address_pairs { 2025-09-04 00:01:37.092163 | orchestrator | 00:01:37.091 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-04 00:01:37.092167 | orchestrator | 00:01:37.091 STDOUT terraform:  } 2025-09-04 00:01:37.092171 | orchestrator | 00:01:37.091 STDOUT terraform:  + binding (known after apply) 2025-09-04 00:01:37.092174 | orchestrator | 00:01:37.091 STDOUT terraform:  + fixed_ip { 2025-09-04 00:01:37.092178 | orchestrator | 00:01:37.091 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-09-04 00:01:37.092182 | orchestrator | 00:01:37.091 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-04 00:01:37.092186 | orchestrator | 00:01:37.092 STDOUT terraform:  } 2025-09-04 00:01:37.092190 | orchestrator | 00:01:37.092 STDOUT terraform:  } 2025-09-04 00:01:37.092197 | orchestrator | 00:01:37.092 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-09-04 00:01:37.092201 | orchestrator | 00:01:37.092 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-04 00:01:37.092205 | orchestrator | 00:01:37.092 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-04 00:01:37.092209 | orchestrator | 00:01:37.092 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-04 00:01:37.092215 | orchestrator | 00:01:37.092 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-04 00:01:37.096190 | orchestrator | 00:01:37.092 STDOUT terraform:  + all_tags = (known after apply) 2025-09-04 00:01:37.096209 | orchestrator | 00:01:37.092 STDOUT terraform:  + device_id = (known after apply) 2025-09-04 00:01:37.096213 | orchestrator | 00:01:37.092 STDOUT terraform:  + device_owner = (known after apply) 2025-09-04 00:01:37.096217 | orchestrator | 00:01:37.092 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-04 00:01:37.096230 | orchestrator | 00:01:37.092 STDOUT terraform:  + dns_name = (known after apply) 2025-09-04 00:01:37.096233 | orchestrator | 00:01:37.092 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.096237 | orchestrator | 00:01:37.092 STDOUT terraform:  + mac_address = (known after apply) 2025-09-04 00:01:37.096241 | orchestrator | 00:01:37.092 STDOUT terraform:  + network_id = (known after apply) 2025-09-04 00:01:37.096245 | orchestrator | 00:01:37.092 STDOUT terraform: 
 + port_security_enabled = (known after apply) 2025-09-04 00:01:37.096249 | orchestrator | 00:01:37.092 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-04 00:01:37.096252 | orchestrator | 00:01:37.092 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.096256 | orchestrator | 00:01:37.092 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-04 00:01:37.096260 | orchestrator | 00:01:37.092 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-04 00:01:37.096264 | orchestrator | 00:01:37.092 STDOUT terraform:  + allowed_address_pairs { 2025-09-04 00:01:37.096268 | orchestrator | 00:01:37.092 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-04 00:01:37.096272 | orchestrator | 00:01:37.092 STDOUT terraform:  } 2025-09-04 00:01:37.096276 | orchestrator | 00:01:37.092 STDOUT terraform:  + allowed_address_pairs { 2025-09-04 00:01:37.096280 | orchestrator | 00:01:37.092 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-04 00:01:37.096284 | orchestrator | 00:01:37.092 STDOUT terraform:  } 2025-09-04 00:01:37.096287 | orchestrator | 00:01:37.092 STDOUT terraform:  + allowed_address_pairs { 2025-09-04 00:01:37.096291 | orchestrator | 00:01:37.092 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-04 00:01:37.096295 | orchestrator | 00:01:37.092 STDOUT terraform:  } 2025-09-04 00:01:37.096299 | orchestrator | 00:01:37.092 STDOUT terraform:  + allowed_address_pairs { 2025-09-04 00:01:37.096302 | orchestrator | 00:01:37.092 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-04 00:01:37.096306 | orchestrator | 00:01:37.092 STDOUT terraform:  } 2025-09-04 00:01:37.096310 | orchestrator | 00:01:37.092 STDOUT terraform:  + binding (known after apply) 2025-09-04 00:01:37.096314 | orchestrator | 00:01:37.092 STDOUT terraform:  + fixed_ip { 2025-09-04 00:01:37.096329 | orchestrator | 00:01:37.092 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-09-04 00:01:37.096333 | orchestrator | 00:01:37.092 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-04 00:01:37.096336 | orchestrator | 00:01:37.092 STDOUT terraform:  } 2025-09-04 00:01:37.096350 | orchestrator | 00:01:37.092 STDOUT terraform:  } 2025-09-04 00:01:37.096354 | orchestrator | 00:01:37.092 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-09-04 00:01:37.096359 | orchestrator | 00:01:37.092 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-09-04 00:01:37.096363 | orchestrator | 00:01:37.093 STDOUT terraform:  + force_destroy = false 2025-09-04 00:01:37.096367 | orchestrator | 00:01:37.093 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.096374 | orchestrator | 00:01:37.093 STDOUT terraform:  + port_id = (known after apply) 2025-09-04 00:01:37.096378 | orchestrator | 00:01:37.093 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.096382 | orchestrator | 00:01:37.093 STDOUT terraform:  + router_id = (known after apply) 2025-09-04 00:01:37.096386 | orchestrator | 00:01:37.093 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-04 00:01:37.096398 | orchestrator | 00:01:37.093 STDOUT terraform:  } 2025-09-04 00:01:37.096402 | orchestrator | 00:01:37.093 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-09-04 00:01:37.096406 | orchestrator | 00:01:37.093 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-09-04 00:01:37.096410 | orchestrator | 
00:01:37.093 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-04 00:01:37.096413 | orchestrator | 00:01:37.093 STDOUT terraform:  + all_tags = (known after apply) 2025-09-04 00:01:37.096417 | orchestrator | 00:01:37.093 STDOUT terraform:  + availability_zone_hints = [ 2025-09-04 00:01:37.096423 | orchestrator | 00:01:37.093 STDOUT terraform:  + "nova", 2025-09-04 00:01:37.096427 | orchestrator | 00:01:37.093 STDOUT terraform:  ] 2025-09-04 00:01:37.096431 | orchestrator | 00:01:37.093 STDOUT terraform:  + distributed = (known after apply) 2025-09-04 00:01:37.096434 | orchestrator | 00:01:37.093 STDOUT terraform:  + enable_snat = (known after apply) 2025-09-04 00:01:37.096439 | orchestrator | 00:01:37.093 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-09-04 00:01:37.096451 | orchestrator | 00:01:37.093 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-09-04 00:01:37.096455 | orchestrator | 00:01:37.093 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.096458 | orchestrator | 00:01:37.093 STDOUT terraform:  + name = "testbed" 2025-09-04 00:01:37.096462 | orchestrator | 00:01:37.093 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.096466 | orchestrator | 00:01:37.093 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-04 00:01:37.096470 | orchestrator | 00:01:37.093 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-09-04 00:01:37.096474 | orchestrator | 00:01:37.093 STDOUT terraform:  } 2025-09-04 00:01:37.096478 | orchestrator | 00:01:37.093 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-09-04 00:01:37.096483 | orchestrator | 00:01:37.093 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-09-04 00:01:37.096486 | orchestrator | 00:01:37.093 STDOUT terraform:  + description = "ssh" 2025-09-04 00:01:37.096490 | orchestrator | 00:01:37.093 STDOUT terraform:  + direction = "ingress" 2025-09-04 00:01:37.096494 | orchestrator | 00:01:37.093 STDOUT terraform:  + ethertype = "IPv4" 2025-09-04 00:01:37.096498 | orchestrator | 00:01:37.093 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.096502 | orchestrator | 00:01:37.093 STDOUT terraform:  + port_range_max = 22 2025-09-04 00:01:37.096506 | orchestrator | 00:01:37.093 STDOUT terraform:  + port_range_min = 22 2025-09-04 00:01:37.096513 | orchestrator | 00:01:37.093 STDOUT terraform:  + protocol = "tcp" 2025-09-04 00:01:37.096517 | orchestrator | 00:01:37.093 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.096521 | orchestrator | 00:01:37.093 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-04 00:01:37.096524 | orchestrator | 00:01:37.093 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-04 00:01:37.096528 | orchestrator | 00:01:37.093 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-04 00:01:37.096532 | orchestrator | 00:01:37.093 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-04 00:01:37.096536 | orchestrator | 00:01:37.094 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-04 00:01:37.096540 | orchestrator | 00:01:37.094 STDOUT terraform:  } 2025-09-04 00:01:37.096544 | orchestrator | 00:01:37.094 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-09-04 00:01:37.096547 | orchestrator | 00:01:37.094 STDOUT 
terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-09-04 00:01:37.096556 | orchestrator | 00:01:37.094 STDOUT terraform:  + description = "wireguard" 2025-09-04 00:01:37.096560 | orchestrator | 00:01:37.094 STDOUT terraform:  + direction = "ingress" 2025-09-04 00:01:37.096564 | orchestrator | 00:01:37.094 STDOUT terraform:  + ethertype = "IPv4" 2025-09-04 00:01:37.096568 | orchestrator | 00:01:37.094 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.096572 | orchestrator | 00:01:37.094 STDOUT terraform:  + port_range_max = 51820 2025-09-04 00:01:37.096576 | orchestrator | 00:01:37.094 STDOUT terraform:  + port_range_min = 51820 2025-09-04 00:01:37.096579 | orchestrator | 00:01:37.094 STDOUT terraform:  + protocol = "udp" 2025-09-04 00:01:37.096583 | orchestrator | 00:01:37.094 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.096587 | orchestrator | 00:01:37.094 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-04 00:01:37.096591 | orchestrator | 00:01:37.094 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-04 00:01:37.096595 | orchestrator | 00:01:37.095 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-04 00:01:37.096601 | orchestrator | 00:01:37.095 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-04 00:01:37.096605 | orchestrator | 00:01:37.095 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-04 00:01:37.096609 | orchestrator | 00:01:37.095 STDOUT terraform:  } 2025-09-04 00:01:37.096613 | orchestrator | 00:01:37.095 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-09-04 00:01:37.096616 | orchestrator | 00:01:37.095 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-09-04 00:01:37.096620 | orchestrator | 00:01:37.095 STDOUT terraform:  + direction = "ingress" 2025-09-04 00:01:37.096624 | orchestrator | 00:01:37.095 STDOUT terraform:  + ethertype = "IPv4" 2025-09-04 00:01:37.096632 | orchestrator | 00:01:37.095 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.096636 | orchestrator | 00:01:37.095 STDOUT terraform:  + protocol = "tcp" 2025-09-04 00:01:37.096640 | orchestrator | 00:01:37.095 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.096643 | orchestrator | 00:01:37.095 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-04 00:01:37.096647 | orchestrator | 00:01:37.095 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-04 00:01:37.096651 | orchestrator | 00:01:37.095 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-09-04 00:01:37.096655 | orchestrator | 00:01:37.095 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-04 00:01:37.096658 | orchestrator | 00:01:37.095 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-04 00:01:37.096662 | orchestrator | 00:01:37.095 STDOUT terraform:  } 2025-09-04 00:01:37.096666 | orchestrator | 00:01:37.095 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-09-04 00:01:37.096670 | orchestrator | 00:01:37.095 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-09-04 00:01:37.096674 | orchestrator | 00:01:37.095 STDOUT terraform:  + direction = "ingress" 2025-09-04 00:01:37.096678 | orchestrator | 00:01:37.095 STDOUT terraform:  
+ ethertype = "IPv4" 2025-09-04 00:01:37.096682 | orchestrator | 00:01:37.095 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.096685 | orchestrator | 00:01:37.095 STDOUT terraform:  + protocol = "udp" 2025-09-04 00:01:37.096689 | orchestrator | 00:01:37.095 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.096693 | orchestrator | 00:01:37.095 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-04 00:01:37.096701 | orchestrator | 00:01:37.095 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-04 00:01:37.096705 | orchestrator | 00:01:37.095 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-09-04 00:01:37.096709 | orchestrator | 00:01:37.095 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-04 00:01:37.096712 | orchestrator | 00:01:37.095 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-04 00:01:37.096716 | orchestrator | 00:01:37.095 STDOUT terraform:  } 2025-09-04 00:01:37.096720 | orchestrator | 00:01:37.095 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-09-04 00:01:37.096724 | orchestrator | 00:01:37.095 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-09-04 00:01:37.096728 | orchestrator | 00:01:37.095 STDOUT terraform:  + direction = "ingress" 2025-09-04 00:01:37.096732 | orchestrator | 00:01:37.096 STDOUT terraform:  + ethertype = "IPv4" 2025-09-04 00:01:37.096736 | orchestrator | 00:01:37.096 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.096740 | orchestrator | 00:01:37.096 STDOUT terraform:  + protocol = "icmp" 2025-09-04 00:01:37.096750 | orchestrator | 00:01:37.096 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.096754 | orchestrator | 00:01:37.096 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-04 00:01:37.096758 | orchestrator | 00:01:37.096 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-04 00:01:37.096761 | orchestrator | 00:01:37.096 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-04 00:01:37.096765 | orchestrator | 00:01:37.096 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-04 00:01:37.096769 | orchestrator | 00:01:37.096 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-04 00:01:37.096773 | orchestrator | 00:01:37.096 STDOUT terraform:  } 2025-09-04 00:01:37.096777 | orchestrator | 00:01:37.096 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-09-04 00:01:37.096781 | orchestrator | 00:01:37.096 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-09-04 00:01:37.096785 | orchestrator | 00:01:37.096 STDOUT terraform:  + direction = "ingress" 2025-09-04 00:01:37.096789 | orchestrator | 00:01:37.096 STDOUT terraform:  + ethertype = "IPv4" 2025-09-04 00:01:37.096793 | orchestrator | 00:01:37.096 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.096796 | orchestrator | 00:01:37.096 STDOUT terraform:  + protocol = "tcp" 2025-09-04 00:01:37.096800 | orchestrator | 00:01:37.096 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.096804 | orchestrator | 00:01:37.096 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-04 00:01:37.096808 | orchestrator | 00:01:37.096 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-04 
00:01:37.096812 | orchestrator | 00:01:37.096 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-04 00:01:37.096816 | orchestrator | 00:01:37.096 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-04 00:01:37.096819 | orchestrator | 00:01:37.096 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-04 00:01:37.096823 | orchestrator | 00:01:37.096 STDOUT terraform:  } 2025-09-04 00:01:37.096827 | orchestrator | 00:01:37.096 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-09-04 00:01:37.096833 | orchestrator | 00:01:37.096 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-09-04 00:01:37.096837 | orchestrator | 00:01:37.096 STDOUT terraform:  + direction = "ingress" 2025-09-04 00:01:37.096841 | orchestrator | 00:01:37.096 STDOUT terraform:  + ethertype = "IPv4" 2025-09-04 00:01:37.096845 | orchestrator | 00:01:37.096 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.096851 | orchestrator | 00:01:37.096 STDOUT terraform:  + protocol = "udp" 2025-09-04 00:01:37.097781 | orchestrator | 00:01:37.096 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.097794 | orchestrator | 00:01:37.096 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-04 00:01:37.097798 | orchestrator | 00:01:37.096 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-04 00:01:37.097822 | orchestrator | 00:01:37.096 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-04 00:01:37.097826 | orchestrator | 00:01:37.096 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-04 00:01:37.097830 | orchestrator | 00:01:37.096 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-04 00:01:37.097834 | orchestrator | 00:01:37.097 STDOUT terraform:  } 2025-09-04 00:01:37.097838 | orchestrator | 00:01:37.097 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-09-04 00:01:37.097842 | orchestrator | 00:01:37.097 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-09-04 00:01:37.097846 | orchestrator | 00:01:37.097 STDOUT terraform:  + direction = "ingress" 2025-09-04 00:01:37.097850 | orchestrator | 00:01:37.097 STDOUT terraform:  + ethertype = "IPv4" 2025-09-04 00:01:37.097854 | orchestrator | 00:01:37.097 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.097857 | orchestrator | 00:01:37.097 STDOUT terraform:  + protocol = "icmp" 2025-09-04 00:01:37.097861 | orchestrator | 00:01:37.097 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.097865 | orchestrator | 00:01:37.097 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-04 00:01:37.097869 | orchestrator | 00:01:37.097 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-04 00:01:37.097873 | orchestrator | 00:01:37.097 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-04 00:01:37.097876 | orchestrator | 00:01:37.097 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-04 00:01:37.097880 | orchestrator | 00:01:37.097 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-04 00:01:37.097884 | orchestrator | 00:01:37.097 STDOUT terraform:  } 2025-09-04 00:01:37.097888 | orchestrator | 00:01:37.097 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-09-04 00:01:37.097892 | orchestrator | 
00:01:37.097 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-09-04 00:01:37.097896 | orchestrator | 00:01:37.097 STDOUT terraform:  + description = "vrrp" 2025-09-04 00:01:37.097900 | orchestrator | 00:01:37.097 STDOUT terraform:  + direction = "ingress" 2025-09-04 00:01:37.097903 | orchestrator | 00:01:37.097 STDOUT terraform:  + ethertype = "IPv4" 2025-09-04 00:01:37.097937 | orchestrator | 00:01:37.097 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.097942 | orchestrator | 00:01:37.097 STDOUT terraform:  + protocol = "112" 2025-09-04 00:01:37.097946 | orchestrator | 00:01:37.097 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.097950 | orchestrator | 00:01:37.097 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-04 00:01:37.097954 | orchestrator | 00:01:37.097 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-04 00:01:37.097957 | orchestrator | 00:01:37.097 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-04 00:01:37.097965 | orchestrator | 00:01:37.097 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-04 00:01:37.097972 | orchestrator | 00:01:37.097 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-04 00:01:37.097976 | orchestrator | 00:01:37.097 STDOUT terraform:  } 2025-09-04 00:01:37.097980 | orchestrator | 00:01:37.097 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-09-04 00:01:37.097984 | orchestrator | 00:01:37.097 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-09-04 00:01:37.097988 | orchestrator | 00:01:37.097 STDOUT terraform:  + all_tags = (known after apply) 2025-09-04 00:01:37.097991 | orchestrator | 00:01:37.097 STDOUT terraform:  + description = "management security group" 2025-09-04 00:01:37.097995 | orchestrator | 00:01:37.097 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.098001 | orchestrator | 00:01:37.097 STDOUT terraform:  + name = "testbed-management" 2025-09-04 00:01:37.098008 | orchestrator | 00:01:37.097 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.102818 | orchestrator | 00:01:37.098 STDOUT terraform:  + stateful = (known after apply) 2025-09-04 00:01:37.102850 | orchestrator | 00:01:37.099 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-04 00:01:37.102855 | orchestrator | 00:01:37.099 STDOUT terraform:  } 2025-09-04 00:01:37.102860 | orchestrator | 00:01:37.099 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-09-04 00:01:37.102865 | orchestrator | 00:01:37.099 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-09-04 00:01:37.102876 | orchestrator | 00:01:37.099 STDOUT terraform:  + all_tags = (known after apply) 2025-09-04 00:01:37.102881 | orchestrator | 00:01:37.099 STDOUT terraform:  + description = "node security group" 2025-09-04 00:01:37.102886 | orchestrator | 00:01:37.099 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.102891 | orchestrator | 00:01:37.099 STDOUT terraform:  + name = "testbed-node" 2025-09-04 00:01:37.102895 | orchestrator | 00:01:37.099 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.102899 | orchestrator | 00:01:37.099 STDOUT terraform:  + stateful = (known after apply) 2025-09-04 00:01:37.102904 | orchestrator | 00:01:37.099 STDOUT terraform:  + tenant_id = (known after 
apply) 2025-09-04 00:01:37.102908 | orchestrator | 00:01:37.099 STDOUT terraform:  } 2025-09-04 00:01:37.102913 | orchestrator | 00:01:37.099 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-09-04 00:01:37.102918 | orchestrator | 00:01:37.099 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-09-04 00:01:37.102923 | orchestrator | 00:01:37.099 STDOUT terraform:  + all_tags = (known after apply) 2025-09-04 00:01:37.102928 | orchestrator | 00:01:37.099 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-09-04 00:01:37.102933 | orchestrator | 00:01:37.099 STDOUT terraform:  + dns_nameservers = [ 2025-09-04 00:01:37.102937 | orchestrator | 00:01:37.099 STDOUT terraform:  + "8.8.8.8", 2025-09-04 00:01:37.102941 | orchestrator | 00:01:37.099 STDOUT terraform:  + "9.9.9.9", 2025-09-04 00:01:37.102951 | orchestrator | 00:01:37.099 STDOUT terraform:  ] 2025-09-04 00:01:37.102955 | orchestrator | 00:01:37.099 STDOUT terraform:  + enable_dhcp = true 2025-09-04 00:01:37.102959 | orchestrator | 00:01:37.099 STDOUT terraform:  + gateway_ip = (known after apply) 2025-09-04 00:01:37.102963 | orchestrator | 00:01:37.099 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.102966 | orchestrator | 00:01:37.099 STDOUT terraform:  + ip_version = 4 2025-09-04 00:01:37.102970 | orchestrator | 00:01:37.099 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-09-04 00:01:37.102974 | orchestrator | 00:01:37.100 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-09-04 00:01:37.102978 | orchestrator | 00:01:37.100 STDOUT terraform:  + name = "subnet-testbed-management" 2025-09-04 00:01:37.102982 | orchestrator | 00:01:37.100 STDOUT terraform:  + network_id = (known after apply) 2025-09-04 00:01:37.102985 | orchestrator | 00:01:37.100 STDOUT terraform:  + no_gateway = false 2025-09-04 00:01:37.102989 | orchestrator | 00:01:37.100 STDOUT terraform:  + region = (known after apply) 2025-09-04 00:01:37.102993 | orchestrator | 00:01:37.100 STDOUT terraform:  + service_types = (known after apply) 2025-09-04 00:01:37.102997 | orchestrator | 00:01:37.100 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-04 00:01:37.103000 | orchestrator | 00:01:37.100 STDOUT terraform:  + allocation_pool { 2025-09-04 00:01:37.103004 | orchestrator | 00:01:37.100 STDOUT terraform:  + end = "192.168.31.250" 2025-09-04 00:01:37.103008 | orchestrator | 00:01:37.100 STDOUT terraform:  + start = "192.168.31.200 2025-09-04 00:01:37.103012 | orchestrator | 00:01:37.100 STDOUT terraform: " 2025-09-04 00:01:37.103016 | orchestrator | 00:01:37.100 STDOUT terraform:  } 2025-09-04 00:01:37.103020 | orchestrator | 00:01:37.100 STDOUT terraform:  } 2025-09-04 00:01:37.103029 | orchestrator | 00:01:37.100 STDOUT terraform:  # terraform_data.image will be created 2025-09-04 00:01:37.103034 | orchestrator | 00:01:37.100 STDOUT terraform:  + resource "terraform_data" "image" { 2025-09-04 00:01:37.103037 | orchestrator | 00:01:37.100 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.103041 | orchestrator | 00:01:37.100 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-09-04 00:01:37.103045 | orchestrator | 00:01:37.100 STDOUT terraform:  + output = (known after apply) 2025-09-04 00:01:37.103049 | orchestrator | 00:01:37.100 STDOUT terraform:  } 2025-09-04 00:01:37.103054 | orchestrator | 00:01:37.100 STDOUT terraform:  # terraform_data.image_node will be created 2025-09-04 00:01:37.103058 | orchestrator | 00:01:37.100 
STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-09-04 00:01:37.103062 | orchestrator | 00:01:37.100 STDOUT terraform:  + id = (known after apply) 2025-09-04 00:01:37.103066 | orchestrator | 00:01:37.100 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-09-04 00:01:37.103070 | orchestrator | 00:01:37.100 STDOUT terraform:  + output = (known after apply) 2025-09-04 00:01:37.103073 | orchestrator | 00:01:37.100 STDOUT terraform:  } 2025-09-04 00:01:37.103077 | orchestrator | 00:01:37.100 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-09-04 00:01:37.103084 | orchestrator | 00:01:37.100 STDOUT terraform: Changes to Outputs: 2025-09-04 00:01:37.103088 | orchestrator | 00:01:37.100 STDOUT terraform:  + manager_address = (sensitive value) 2025-09-04 00:01:37.103092 | orchestrator | 00:01:37.100 STDOUT terraform:  + private_key = (sensitive value) 2025-09-04 00:01:37.188433 | orchestrator | 00:01:37.188 STDOUT terraform: terraform_data.image_node: Creating... 2025-09-04 00:01:37.188529 | orchestrator | 00:01:37.188 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=0c21b8f9-e6cf-7a2f-3e63-ac67b63c32ba] 2025-09-04 00:01:37.247637 | orchestrator | 00:01:37.247 STDOUT terraform: terraform_data.image: Creating... 2025-09-04 00:01:37.247702 | orchestrator | 00:01:37.247 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=ad2b8e79-3dba-6a31-c993-bb8d7b0237c9] 2025-09-04 00:01:37.264966 | orchestrator | 00:01:37.264 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-09-04 00:01:37.265021 | orchestrator | 00:01:37.264 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-09-04 00:01:37.278127 | orchestrator | 00:01:37.277 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-09-04 00:01:37.278168 | orchestrator | 00:01:37.278 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-09-04 00:01:37.278194 | orchestrator | 00:01:37.278 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-09-04 00:01:37.278661 | orchestrator | 00:01:37.278 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-09-04 00:01:37.278741 | orchestrator | 00:01:37.278 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-09-04 00:01:37.278849 | orchestrator | 00:01:37.278 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-09-04 00:01:37.279679 | orchestrator | 00:01:37.279 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-09-04 00:01:37.280089 | orchestrator | 00:01:37.280 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-09-04 00:01:37.738340 | orchestrator | 00:01:37.737 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2025-09-04 00:01:37.743620 | orchestrator | 00:01:37.743 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-09-04 00:01:37.745434 | orchestrator | 00:01:37.745 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2025-09-04 00:01:37.751158 | orchestrator | 00:01:37.750 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 
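The plan entries above for openstack_networking_subnet_v2.subnet_management and terraform_data.image can be read back into source form. A minimal HCL sketch follows; only the literal values are taken from the plan output, while the network reference is an assumption:

resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id  # assumed reference
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  # DHCP range shown in the plan output
  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}

# terraform_data simply carries the image name so dependent resources can react to changes
resource "terraform_data" "image" {
  input = "Ubuntu 24.04"
}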
2025-09-04 00:01:37.816741 | orchestrator | 00:01:37.814 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2025-09-04 00:01:37.819406 | orchestrator | 00:01:37.819 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-09-04 00:01:38.383719 | orchestrator | 00:01:38.383 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=ad969404-0e74-43ff-a3c8-bbc12216e225] 2025-09-04 00:01:38.390378 | orchestrator | 00:01:38.390 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-09-04 00:01:40.937542 | orchestrator | 00:01:40.937 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=061ec2a1-a81e-446f-a16e-21a91d07e637] 2025-09-04 00:01:40.945388 | orchestrator | 00:01:40.945 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-09-04 00:01:40.964653 | orchestrator | 00:01:40.964 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=a760aba3-b640-428b-b8ab-7cd9fc53fd0e] 2025-09-04 00:01:40.971668 | orchestrator | 00:01:40.971 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-09-04 00:01:41.014665 | orchestrator | 00:01:41.014 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=d9ebab6f-eb3e-4afd-8760-ef197f6a89e1] 2025-09-04 00:01:41.024242 | orchestrator | 00:01:41.023 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-09-04 00:01:41.060543 | orchestrator | 00:01:41.060 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=1db7e25a-216c-47db-963d-7c5f28c46cb6] 2025-09-04 00:01:41.066302 | orchestrator | 00:01:41.064 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-09-04 00:01:41.066849 | orchestrator | 00:01:41.066 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=9f385429-6afa-4c66-a502-4318c5de8bbc] 2025-09-04 00:01:41.071720 | orchestrator | 00:01:41.071 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-09-04 00:01:41.085994 | orchestrator | 00:01:41.085 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=29dd0003-fb59-4bf4-b86e-44c71c2fb42f] 2025-09-04 00:01:41.096306 | orchestrator | 00:01:41.095 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-09-04 00:01:41.096376 | orchestrator | 00:01:41.096 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=bcfba7c6-2fb5-4d57-9e19-86dc2ee83b3a] 2025-09-04 00:01:41.105593 | orchestrator | 00:01:41.105 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=10337e6d-8da5-41a0-908c-89b7c0023a77] 2025-09-04 00:01:41.106781 | orchestrator | 00:01:41.106 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-09-04 00:01:41.111616 | orchestrator | 00:01:41.111 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 
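The keypair and the indexed node_volume resources created above would typically be declared with count. A rough sketch, assuming the count, naming scheme, and volume size, none of which are visible in this log:

resource "openstack_compute_keypair_v2" "key" {
  name       = "testbed"
  public_key = file("${path.module}/id_rsa.pub")  # assumed key source
}

resource "openstack_blockstorage_volume_v3" "node_volume" {
  count = 9                                       # node_volume[0] .. node_volume[8] above
  name  = "testbed-node-volume-${count.index}"    # assumed naming scheme
  size  = 20                                      # assumed size in GB
}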
2025-09-04 00:01:41.121367 | orchestrator | 00:01:41.121 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=c5e589a5ec1e5bc1ffbcb640d9c7ece0f3b9463a] 2025-09-04 00:01:41.121898 | orchestrator | 00:01:41.121 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=27e1424f2137d3975f0d2ba6f0bb39deb0246aa8] 2025-09-04 00:01:41.131283 | orchestrator | 00:01:41.131 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-09-04 00:01:41.143487 | orchestrator | 00:01:41.143 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=7a7dbb75-788e-43b1-b83f-869b645534fa] 2025-09-04 00:01:41.805170 | orchestrator | 00:01:41.804 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=139efff4-af39-4a60-9237-537e42bca61b] 2025-09-04 00:01:42.137082 | orchestrator | 00:01:42.136 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=9c4e7c66-f118-4857-95e8-6f12ff300210] 2025-09-04 00:01:42.144877 | orchestrator | 00:01:42.144 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-09-04 00:01:44.336545 | orchestrator | 00:01:44.336 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=0d7986e8-ee88-4385-890c-4417efeba472] 2025-09-04 00:01:44.340296 | orchestrator | 00:01:44.339 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=1bb9a345-bda2-451d-96bf-1b31ebdd8b0d] 2025-09-04 00:01:44.626960 | orchestrator | 00:01:44.626 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 3s [id=35254283-f567-479c-9533-4c72b9fbad1b] 2025-09-04 00:01:44.645417 | orchestrator | 00:01:44.645 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-09-04 00:01:44.647613 | orchestrator | 00:01:44.647 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-09-04 00:01:44.648416 | orchestrator | 00:01:44.648 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=2f348311-5206-43c3-98ff-251796244150] 2025-09-04 00:01:44.648845 | orchestrator | 00:01:44.648 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-09-04 00:01:44.655409 | orchestrator | 00:01:44.655 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=d1acc59b-6cf7-43a3-afb9-a02c934829cb] 2025-09-04 00:01:44.662213 | orchestrator | 00:01:44.661 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=e0f9fbc0-3dcb-459a-ad8b-9f15e0856598] 2025-09-04 00:01:44.664367 | orchestrator | 00:01:44.664 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=e94f9961-caee-4d19-af27-42e09fc65b04] 2025-09-04 00:01:44.846132 | orchestrator | 00:01:44.845 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=a69ed183-2678-48c7-b33e-f4d64f5a933f] 2025-09-04 00:01:44.857918 | orchestrator | 00:01:44.857 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-09-04 00:01:44.859927 | orchestrator | 00:01:44.859 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 
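A minimal HCL sketch of the router and router interface whose creation completes above; the names and literal values come from the plan output earlier in this log, while the cross-references between resources are assumptions:

resource "openstack_networking_router_v2" "router" {
  name                    = "testbed"
  external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
  availability_zone_hints = ["nova"]
}

resource "openstack_networking_router_interface_v2" "router_interface" {
  router_id = openstack_networking_router_v2.router.id
  subnet_id = openstack_networking_subnet_v2.subnet_management.id
}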
2025-09-04 00:01:44.860314 | orchestrator | 00:01:44.860 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-09-04 00:01:44.866400 | orchestrator | 00:01:44.866 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-09-04 00:01:44.872490 | orchestrator | 00:01:44.872 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-09-04 00:01:44.875741 | orchestrator | 00:01:44.875 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-09-04 00:01:44.878671 | orchestrator | 00:01:44.878 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-09-04 00:01:44.886867 | orchestrator | 00:01:44.886 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=d1760362-c689-48c8-af86-b6b3709c9505] 2025-09-04 00:01:44.888288 | orchestrator | 00:01:44.888 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-09-04 00:01:44.901362 | orchestrator | 00:01:44.901 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-09-04 00:01:45.031688 | orchestrator | 00:01:45.031 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=6b14e9a6-888a-4a2e-995f-8497b3025769] 2025-09-04 00:01:45.051167 | orchestrator | 00:01:45.050 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-09-04 00:01:45.397921 | orchestrator | 00:01:45.397 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=c4f2f500-0400-4bbe-8da5-a55520846fcb] 2025-09-04 00:01:45.405272 | orchestrator | 00:01:45.405 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-09-04 00:01:45.595877 | orchestrator | 00:01:45.595 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=1d0bd3a7-81f6-4403-a6ac-a6e59aa218a2] 2025-09-04 00:01:45.602452 | orchestrator | 00:01:45.602 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-09-04 00:01:45.603634 | orchestrator | 00:01:45.603 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=93c66f60-2b49-4f14-aa01-40df32abb045] 2025-09-04 00:01:45.614255 | orchestrator | 00:01:45.614 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-09-04 00:01:45.664474 | orchestrator | 00:01:45.664 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=f5a3cf13-372b-4de0-9020-1db6a0902a10] 2025-09-04 00:01:45.670915 | orchestrator | 00:01:45.670 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-09-04 00:01:45.715130 | orchestrator | 00:01:45.714 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=4e63dbcf-5550-44a7-907e-6dda2cba6b3f] 2025-09-04 00:01:45.721153 | orchestrator | 00:01:45.721 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 
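The management security group and its SSH rule created above correspond to the plan entries for security_group_management and security_group_management_rule1. A sketch in HCL, with the rule attached to the group by reference (an assumption; the plan only shows security_group_id as known after apply):

resource "openstack_networking_secgroup_v2" "security_group_management" {
  name        = "testbed-management"
  description = "management security group"
}

resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}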
2025-09-04 00:01:45.801066 | orchestrator | 00:01:45.800 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=801a770a-0901-490c-bf26-a5a95184530f] 2025-09-04 00:01:45.806624 | orchestrator | 00:01:45.806 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-09-04 00:01:45.966983 | orchestrator | 00:01:45.966 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=070c3f39-c8de-4aa4-9001-a81ebaa74b99] 2025-09-04 00:01:46.056111 | orchestrator | 00:01:46.055 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=098ee1c3-36ea-4a50-b91f-e67bef5c7afa] 2025-09-04 00:01:46.133933 | orchestrator | 00:01:46.133 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=70e66f6e-1b35-49d0-aaa8-d3c277fde280] 2025-09-04 00:01:46.160335 | orchestrator | 00:01:46.160 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=21170a48-1eb9-4859-9ac5-28d4dc1b3160] 2025-09-04 00:01:46.296904 | orchestrator | 00:01:46.296 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=d52d3ef0-a42c-4929-9deb-5ff2b4570254] 2025-09-04 00:01:46.343708 | orchestrator | 00:01:46.343 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=89d75ea8-0c30-4cd0-ab6e-15d1d63b99d7] 2025-09-04 00:01:46.472109 | orchestrator | 00:01:46.471 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=80662ce3-5e77-41a1-81ea-fc2818b2c5db] 2025-09-04 00:01:46.563518 | orchestrator | 00:01:46.563 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=7352b322-0280-4714-9532-82d67c7d1554] 2025-09-04 00:01:46.743624 | orchestrator | 00:01:46.743 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=c31e170a-a9de-40e5-9928-202e4d424176] 2025-09-04 00:01:47.745309 | orchestrator | 00:01:47.738 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=0f5ec3c9-48a2-4c76-abea-98e8b05a35b8] 2025-09-04 00:01:47.760048 | orchestrator | 00:01:47.759 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-09-04 00:01:47.773491 | orchestrator | 00:01:47.773 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-09-04 00:01:47.776382 | orchestrator | 00:01:47.776 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-09-04 00:01:47.780226 | orchestrator | 00:01:47.780 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-09-04 00:01:47.789622 | orchestrator | 00:01:47.789 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-09-04 00:01:47.789900 | orchestrator | 00:01:47.789 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-09-04 00:01:47.793811 | orchestrator | 00:01:47.793 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 
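The node_port_management ports completing above carry the allowed_address_pairs and fixed_ip settings shown in the plan at the top of this section. A sketch using count; the count value and the address arithmetic are assumptions chosen only to match indexes [0..5] and the fixed IPs 192.168.16.14/.15 seen in the plan:

resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
    ip_address = "192.168.16.${10 + count.index}"  # [4] -> .14, [5] -> .15 as in the plan
  }

  # Additional prefixes the port may answer for (VIPs, routed node networks);
  # further pairs from the plan (192.168.16.8/20, 192.168.16.9/20) omitted for brevity.
  allowed_address_pairs {
    ip_address = "192.168.112.0/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.254/20"
  }
}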
2025-09-04 00:01:49.808205 | orchestrator | 00:01:49.807 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=a325f913-9a21-4c15-b5df-bd045b718301] 2025-09-04 00:01:49.821651 | orchestrator | 00:01:49.821 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-09-04 00:01:49.824590 | orchestrator | 00:01:49.824 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-09-04 00:01:49.827574 | orchestrator | 00:01:49.827 STDOUT terraform: local_file.inventory: Creating... 2025-09-04 00:01:49.831204 | orchestrator | 00:01:49.831 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=b6e6cefa515aae49256aaf56e9f625d34b65c798] 2025-09-04 00:01:49.836174 | orchestrator | 00:01:49.835 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=086ef3d51266145b142c3ac5d0f84ad9168f6748] 2025-09-04 00:01:51.405491 | orchestrator | 00:01:51.405 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=a325f913-9a21-4c15-b5df-bd045b718301] 2025-09-04 00:01:57.776023 | orchestrator | 00:01:57.775 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-09-04 00:01:57.779988 | orchestrator | 00:01:57.779 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-09-04 00:01:57.783145 | orchestrator | 00:01:57.782 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-09-04 00:01:57.791468 | orchestrator | 00:01:57.791 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-09-04 00:01:57.791599 | orchestrator | 00:01:57.791 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-09-04 00:01:57.800779 | orchestrator | 00:01:57.800 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-09-04 00:02:07.776845 | orchestrator | 00:02:07.776 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-09-04 00:02:07.780880 | orchestrator | 00:02:07.780 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-09-04 00:02:07.784126 | orchestrator | 00:02:07.783 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-09-04 00:02:07.792478 | orchestrator | 00:02:07.792 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-09-04 00:02:07.792620 | orchestrator | 00:02:07.792 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-09-04 00:02:07.801869 | orchestrator | 00:02:07.801 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-09-04 00:02:08.344481 | orchestrator | 00:02:08.344 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 20s [id=710c77c5-3ef8-49cd-b324-b495f311d06d] 2025-09-04 00:02:08.384874 | orchestrator | 00:02:08.384 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 20s [id=96fe3614-bdcf-4205-b437-c9d98eaef4b5] 2025-09-04 00:02:17.779232 | orchestrator | 00:02:17.778 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... 
[30s elapsed] 2025-09-04 00:02:17.781214 | orchestrator | 00:02:17.781 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2025-09-04 00:02:17.793568 | orchestrator | 00:02:17.793 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2025-09-04 00:02:17.793698 | orchestrator | 00:02:17.793 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2025-09-04 00:02:18.384125 | orchestrator | 00:02:18.383 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 30s [id=9f785130-41a1-42b6-98cd-1d1c263a3d16] 2025-09-04 00:02:18.545130 | orchestrator | 00:02:18.544 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=103bba96-16b4-48aa-bb8c-b8c5fee50093] 2025-09-04 00:02:18.664418 | orchestrator | 00:02:18.663 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=39f9ddfa-86c1-40f6-8cc9-e568e9e5b5bb] 2025-09-04 00:02:18.821855 | orchestrator | 00:02:18.821 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=f6af1a1d-0b90-426a-8d15-08e0d9b3c239] 2025-09-04 00:02:18.847011 | orchestrator | 00:02:18.844 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-09-04 00:02:18.852567 | orchestrator | 00:02:18.852 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=8045640202008026713] 2025-09-04 00:02:18.855428 | orchestrator | 00:02:18.855 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-09-04 00:02:18.856033 | orchestrator | 00:02:18.855 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-09-04 00:02:18.856512 | orchestrator | 00:02:18.856 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-09-04 00:02:18.858700 | orchestrator | 00:02:18.858 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-09-04 00:02:18.859422 | orchestrator | 00:02:18.859 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-09-04 00:02:18.884206 | orchestrator | 00:02:18.884 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-09-04 00:02:18.893302 | orchestrator | 00:02:18.892 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-09-04 00:02:18.894672 | orchestrator | 00:02:18.894 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-09-04 00:02:18.904099 | orchestrator | 00:02:18.904 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 2025-09-04 00:02:18.904903 | orchestrator | 00:02:18.904 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 
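The nine node_volume_attachment resources starting above pair the extra node volumes with three of the node servers. A sketch with count; the index-to-server mapping is an assumption, picked only to be consistent with the attachment IDs logged below (volumes 0/3/6 on one server, 1/4/7 on the next, 2/5/8 on the third):

resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[3 + count.index % 3].id  # assumed mapping
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}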
2025-09-04 00:02:22.386864 | orchestrator | 00:02:22.386 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 3s [id=103bba96-16b4-48aa-bb8c-b8c5fee50093/10337e6d-8da5-41a0-908c-89b7c0023a77] 2025-09-04 00:02:22.405175 | orchestrator | 00:02:22.404 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=96fe3614-bdcf-4205-b437-c9d98eaef4b5/a760aba3-b640-428b-b8ab-7cd9fc53fd0e] 2025-09-04 00:02:22.425136 | orchestrator | 00:02:22.424 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=710c77c5-3ef8-49cd-b324-b495f311d06d/061ec2a1-a81e-446f-a16e-21a91d07e637] 2025-09-04 00:02:22.475411 | orchestrator | 00:02:22.474 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=96fe3614-bdcf-4205-b437-c9d98eaef4b5/bcfba7c6-2fb5-4d57-9e19-86dc2ee83b3a] 2025-09-04 00:02:28.518554 | orchestrator | 00:02:28.518 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 10s [id=103bba96-16b4-48aa-bb8c-b8c5fee50093/1db7e25a-216c-47db-963d-7c5f28c46cb6] 2025-09-04 00:02:28.536814 | orchestrator | 00:02:28.536 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=710c77c5-3ef8-49cd-b324-b495f311d06d/9f385429-6afa-4c66-a502-4318c5de8bbc] 2025-09-04 00:02:28.558770 | orchestrator | 00:02:28.558 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=103bba96-16b4-48aa-bb8c-b8c5fee50093/29dd0003-fb59-4bf4-b86e-44c71c2fb42f] 2025-09-04 00:02:28.567738 | orchestrator | 00:02:28.567 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 10s [id=710c77c5-3ef8-49cd-b324-b495f311d06d/7a7dbb75-788e-43b1-b83f-869b645534fa] 2025-09-04 00:02:28.605984 | orchestrator | 00:02:28.605 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=96fe3614-bdcf-4205-b437-c9d98eaef4b5/d9ebab6f-eb3e-4afd-8760-ef197f6a89e1] 2025-09-04 00:02:28.905507 | orchestrator | 00:02:28.905 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-09-04 00:02:38.905918 | orchestrator | 00:02:38.905 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-09-04 00:02:39.366333 | orchestrator | 00:02:39.366 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=7bf1726f-e4ed-409c-9e38-72e22805e064] 2025-09-04 00:02:39.387304 | orchestrator | 00:02:39.387 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
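The manager_address and private_key outputs printed just below are both marked sensitive. They would be declared roughly as follows; the value expressions are assumptions, since the log only shows the output names and their sensitive marking:

output "manager_address" {
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address  # assumed source
  sensitive = true
}

output "private_key" {
  value     = local_sensitive_file.id_rsa.content  # assumed source
  sensitive = true
}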
2025-09-04 00:02:39.387399 | orchestrator | 00:02:39.387 STDOUT terraform: Outputs: 2025-09-04 00:02:39.387416 | orchestrator | 00:02:39.387 STDOUT terraform: manager_address = 2025-09-04 00:02:39.387450 | orchestrator | 00:02:39.387 STDOUT terraform: private_key = 2025-09-04 00:02:39.606600 | orchestrator | ok: Runtime: 0:01:08.439506 2025-09-04 00:02:39.641198 | 2025-09-04 00:02:39.641299 | TASK [Create infrastructure (stable)] 2025-09-04 00:02:40.172529 | orchestrator | skipping: Conditional result was False 2025-09-04 00:02:40.190600 | 2025-09-04 00:02:40.190769 | TASK [Fetch manager address] 2025-09-04 00:02:40.594596 | orchestrator | ok 2025-09-04 00:02:40.605048 | 2025-09-04 00:02:40.605168 | TASK [Set manager_host address] 2025-09-04 00:02:40.676152 | orchestrator | ok 2025-09-04 00:02:40.683174 | 2025-09-04 00:02:40.683267 | LOOP [Update ansible collections] 2025-09-04 00:02:41.485681 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-09-04 00:02:41.486153 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-04 00:02:41.486234 | orchestrator | Starting galaxy collection install process 2025-09-04 00:02:41.486306 | orchestrator | Process install dependency map 2025-09-04 00:02:41.486368 | orchestrator | Starting collection install process 2025-09-04 00:02:41.486429 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons' 2025-09-04 00:02:41.486484 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons 2025-09-04 00:02:41.486536 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-09-04 00:02:41.486627 | orchestrator | ok: Item: commons Runtime: 0:00:00.494601 2025-09-04 00:02:42.277795 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-04 00:02:42.277912 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-09-04 00:02:42.277944 | orchestrator | Starting galaxy collection install process 2025-09-04 00:02:42.277967 | orchestrator | Process install dependency map 2025-09-04 00:02:42.277987 | orchestrator | Starting collection install process 2025-09-04 00:02:42.278007 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services' 2025-09-04 00:02:42.278028 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services 2025-09-04 00:02:42.278047 | orchestrator | osism.services:999.0.0 was installed successfully 2025-09-04 00:02:42.278080 | orchestrator | ok: Item: services Runtime: 0:00:00.547083 2025-09-04 00:02:42.298794 | 2025-09-04 00:02:42.298957 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-04 00:02:52.824741 | orchestrator | ok 2025-09-04 00:02:52.834084 | 2025-09-04 00:02:52.834191 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-04 00:03:52.880156 | orchestrator | ok 2025-09-04 00:03:52.891215 | 2025-09-04 00:03:52.891342 | TASK [Fetch manager ssh hostkey] 2025-09-04 00:03:54.461461 | orchestrator | Output suppressed because no_log was given 2025-09-04 00:03:54.475857 | 2025-09-04 00:03:54.476037 | TASK [Get ssh keypair from terraform environment] 2025-09-04 00:03:55.011032 | orchestrator 
| ok: Runtime: 0:00:00.011536 2025-09-04 00:03:55.027308 | 2025-09-04 00:03:55.027537 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-04 00:03:55.074200 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-09-04 00:03:55.083476 | 2025-09-04 00:03:55.083602 | TASK [Run manager part 0] 2025-09-04 00:03:55.939560 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-04 00:03:55.984041 | orchestrator | 2025-09-04 00:03:55.984095 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-09-04 00:03:55.984104 | orchestrator | 2025-09-04 00:03:55.984119 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-09-04 00:03:57.851150 | orchestrator | ok: [testbed-manager] 2025-09-04 00:03:57.851239 | orchestrator | 2025-09-04 00:03:57.851270 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-04 00:03:57.851280 | orchestrator | 2025-09-04 00:03:57.851291 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-04 00:03:59.953669 | orchestrator | ok: [testbed-manager] 2025-09-04 00:03:59.953763 | orchestrator | 2025-09-04 00:03:59.953778 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-04 00:04:00.622226 | orchestrator | ok: [testbed-manager] 2025-09-04 00:04:00.622308 | orchestrator | 2025-09-04 00:04:00.622318 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-04 00:04:00.673054 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:04:00.673085 | orchestrator | 2025-09-04 00:04:00.673095 | orchestrator | TASK [Update package cache] **************************************************** 2025-09-04 00:04:00.697203 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:04:00.697221 | orchestrator | 2025-09-04 00:04:00.697228 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-04 00:04:00.720609 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:04:00.720627 | orchestrator | 2025-09-04 00:04:00.720632 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-09-04 00:04:00.743690 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:04:00.743710 | orchestrator | 2025-09-04 00:04:00.743714 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-04 00:04:00.766595 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:04:00.766612 | orchestrator | 2025-09-04 00:04:00.766617 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-09-04 00:04:00.793454 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:04:00.793469 | orchestrator | 2025-09-04 00:04:00.793477 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-09-04 00:04:00.817545 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:04:00.817568 | orchestrator | 2025-09-04 00:04:00.817574 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-09-04 00:04:01.561088 | orchestrator | changed: 
[testbed-manager] 2025-09-04 00:04:01.561182 | orchestrator | 2025-09-04 00:04:01.561197 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-09-04 00:06:36.743986 | orchestrator | changed: [testbed-manager] 2025-09-04 00:06:36.744076 | orchestrator | 2025-09-04 00:06:36.744096 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-09-04 00:07:56.267055 | orchestrator | changed: [testbed-manager] 2025-09-04 00:07:56.267171 | orchestrator | 2025-09-04 00:07:56.267189 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-04 00:08:17.112247 | orchestrator | changed: [testbed-manager] 2025-09-04 00:08:17.112343 | orchestrator | 2025-09-04 00:08:17.112360 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-09-04 00:08:27.243397 | orchestrator | changed: [testbed-manager] 2025-09-04 00:08:27.243536 | orchestrator | 2025-09-04 00:08:27.243564 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-04 00:08:27.293883 | orchestrator | ok: [testbed-manager] 2025-09-04 00:08:27.293960 | orchestrator | 2025-09-04 00:08:27.293973 | orchestrator | TASK [Get current user] ******************************************************** 2025-09-04 00:08:28.075158 | orchestrator | ok: [testbed-manager] 2025-09-04 00:08:28.075207 | orchestrator | 2025-09-04 00:08:28.075218 | orchestrator | TASK [Create venv directory] *************************************************** 2025-09-04 00:08:28.818370 | orchestrator | changed: [testbed-manager] 2025-09-04 00:08:28.818482 | orchestrator | 2025-09-04 00:08:28.818500 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-09-04 00:08:35.551148 | orchestrator | changed: [testbed-manager] 2025-09-04 00:08:35.551301 | orchestrator | 2025-09-04 00:08:35.551336 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-09-04 00:08:42.216606 | orchestrator | changed: [testbed-manager] 2025-09-04 00:08:42.216662 | orchestrator | 2025-09-04 00:08:42.216673 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-09-04 00:08:44.871411 | orchestrator | changed: [testbed-manager] 2025-09-04 00:08:44.871507 | orchestrator | 2025-09-04 00:08:44.871516 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-09-04 00:08:46.729696 | orchestrator | changed: [testbed-manager] 2025-09-04 00:08:46.729742 | orchestrator | 2025-09-04 00:08:46.729751 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-09-04 00:08:47.848383 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-04 00:08:47.848510 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-04 00:08:47.848527 | orchestrator | 2025-09-04 00:08:47.848542 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-09-04 00:08:47.893706 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-04 00:08:47.893799 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-04 00:08:47.893817 | orchestrator | 2.19. 
Deprecation warnings can be disabled by setting 2025-09-04 00:08:47.893832 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-09-04 00:08:51.299594 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-04 00:08:51.299680 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-04 00:08:51.299693 | orchestrator | 2025-09-04 00:08:51.299705 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-09-04 00:08:51.875023 | orchestrator | changed: [testbed-manager] 2025-09-04 00:08:51.875122 | orchestrator | 2025-09-04 00:08:51.875138 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-09-04 00:11:14.164539 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-09-04 00:11:14.166117 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-09-04 00:11:14.166139 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-09-04 00:11:14.166152 | orchestrator | 2025-09-04 00:11:14.166166 | orchestrator | TASK [Install local collections] *********************************************** 2025-09-04 00:11:16.476784 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-09-04 00:11:16.476867 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-09-04 00:11:16.476882 | orchestrator | 2025-09-04 00:11:16.476896 | orchestrator | PLAY [Create operator user] **************************************************** 2025-09-04 00:11:16.476909 | orchestrator | 2025-09-04 00:11:16.476921 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-04 00:11:17.862245 | orchestrator | ok: [testbed-manager] 2025-09-04 00:11:17.862328 | orchestrator | 2025-09-04 00:11:17.862347 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-04 00:11:17.909728 | orchestrator | ok: [testbed-manager] 2025-09-04 00:11:17.909793 | orchestrator | 2025-09-04 00:11:17.909809 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-04 00:11:17.976748 | orchestrator | ok: [testbed-manager] 2025-09-04 00:11:17.976811 | orchestrator | 2025-09-04 00:11:17.976825 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-04 00:11:18.775355 | orchestrator | changed: [testbed-manager] 2025-09-04 00:11:18.775443 | orchestrator | 2025-09-04 00:11:18.775486 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-04 00:11:19.517150 | orchestrator | changed: [testbed-manager] 2025-09-04 00:11:19.517195 | orchestrator | 2025-09-04 00:11:19.517205 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-04 00:11:20.906985 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-09-04 00:11:20.907024 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-09-04 00:11:20.907031 | orchestrator | 2025-09-04 00:11:20.907044 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-09-04 00:11:22.255621 | orchestrator | changed: [testbed-manager] 2025-09-04 00:11:22.255728 | orchestrator | 2025-09-04 00:11:22.255746 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc 
configuration file] *** 2025-09-04 00:11:24.008088 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-09-04 00:11:24.008175 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-09-04 00:11:24.008189 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-09-04 00:11:24.008200 | orchestrator | 2025-09-04 00:11:24.008213 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-09-04 00:11:24.066474 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:11:24.066564 | orchestrator | 2025-09-04 00:11:24.066585 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-09-04 00:11:24.645447 | orchestrator | changed: [testbed-manager] 2025-09-04 00:11:24.645550 | orchestrator | 2025-09-04 00:11:24.645569 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-09-04 00:11:24.703499 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:11:24.703582 | orchestrator | 2025-09-04 00:11:24.703599 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-09-04 00:11:25.570281 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-04 00:11:25.570373 | orchestrator | changed: [testbed-manager] 2025-09-04 00:11:25.570393 | orchestrator | 2025-09-04 00:11:25.570406 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-09-04 00:11:25.609437 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:11:25.609508 | orchestrator | 2025-09-04 00:11:25.609518 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-09-04 00:11:25.640019 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:11:25.640078 | orchestrator | 2025-09-04 00:11:25.640089 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-09-04 00:11:25.677812 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:11:25.677903 | orchestrator | 2025-09-04 00:11:25.677921 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-09-04 00:11:25.730655 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:11:25.730695 | orchestrator | 2025-09-04 00:11:25.730705 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-09-04 00:11:26.451950 | orchestrator | ok: [testbed-manager] 2025-09-04 00:11:26.452027 | orchestrator | 2025-09-04 00:11:26.452043 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-04 00:11:26.452056 | orchestrator | 2025-09-04 00:11:26.452067 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-04 00:11:27.919809 | orchestrator | ok: [testbed-manager] 2025-09-04 00:11:27.919845 | orchestrator | 2025-09-04 00:11:27.919852 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-09-04 00:11:28.890933 | orchestrator | changed: [testbed-manager] 2025-09-04 00:11:28.890965 | orchestrator | 2025-09-04 00:11:28.890971 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:11:28.890978 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-09-04 
00:11:28.890982 | orchestrator | 2025-09-04 00:11:29.410677 | orchestrator | ok: Runtime: 0:07:33.591715 2025-09-04 00:11:29.429876 | 2025-09-04 00:11:29.430024 | TASK [Point out that the log in on the manager is now possible] 2025-09-04 00:11:29.468468 | orchestrator | ok: It is already possible to log in to the manager with 'make login'. 2025-09-04 00:11:29.477649 | 2025-09-04 00:11:29.477866 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-04 00:11:29.512589 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-09-04 00:11:29.520690 | 2025-09-04 00:11:29.520807 | TASK [Run manager part 1 + 2] 2025-09-04 00:11:30.350091 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-04 00:11:30.401243 | orchestrator | 2025-09-04 00:11:30.401297 | orchestrator | PLAY [Run manager part 1] ******************************************************* 2025-09-04 00:11:30.401306 | orchestrator | 2025-09-04 00:11:30.401322 | orchestrator | TASK [Gathering Facts] ********************************************************** 2025-09-04 00:11:33.388303 | orchestrator | ok: [testbed-manager] 2025-09-04 00:11:33.388357 | orchestrator | 2025-09-04 00:11:33.388385 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************* 2025-09-04 00:11:33.420587 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:11:33.420639 | orchestrator | 2025-09-04 00:11:33.420670 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************* 2025-09-04 00:11:33.468335 | orchestrator | ok: [testbed-manager] 2025-09-04 00:11:33.468386 | orchestrator | 2025-09-04 00:11:33.468396 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-04 00:11:33.514054 | orchestrator | ok: [testbed-manager] 2025-09-04 00:11:33.514108 | orchestrator | 2025-09-04 00:11:33.514118 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-04 00:11:33.588293 | orchestrator | ok: [testbed-manager] 2025-09-04 00:11:33.588347 | orchestrator | 2025-09-04 00:11:33.588358 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-04 00:11:33.644032 | orchestrator | ok: [testbed-manager] 2025-09-04 00:11:33.644080 | orchestrator | 2025-09-04 00:11:33.644090 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-04 00:11:33.685792 | orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-09-04 00:11:33.685826 | orchestrator | 2025-09-04 00:11:33.685832 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-04 00:11:34.386769 | orchestrator | ok: [testbed-manager] 2025-09-04 00:11:34.386816 | orchestrator | 2025-09-04 00:11:34.386827 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-04 00:11:34.432548 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:11:34.432592 | orchestrator | 2025-09-04 00:11:34.432601 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-04 00:11:35.717920 | orchestrator | changed:
[testbed-manager] 2025-09-04 00:11:35.717964 | orchestrator | 2025-09-04 00:11:35.717973 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-04 00:11:36.291316 | orchestrator | ok: [testbed-manager] 2025-09-04 00:11:36.291359 | orchestrator | 2025-09-04 00:11:36.291368 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-04 00:11:37.407724 | orchestrator | changed: [testbed-manager] 2025-09-04 00:11:37.407765 | orchestrator | 2025-09-04 00:11:37.407774 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-04 00:11:54.449786 | orchestrator | changed: [testbed-manager] 2025-09-04 00:11:54.449908 | orchestrator | 2025-09-04 00:11:54.449926 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-04 00:11:55.054979 | orchestrator | ok: [testbed-manager] 2025-09-04 00:11:55.055064 | orchestrator | 2025-09-04 00:11:55.055083 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-04 00:11:55.109140 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:11:55.109198 | orchestrator | 2025-09-04 00:11:55.109208 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-09-04 00:11:56.038898 | orchestrator | changed: [testbed-manager] 2025-09-04 00:11:56.038933 | orchestrator | 2025-09-04 00:11:56.038942 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-09-04 00:11:56.988184 | orchestrator | changed: [testbed-manager] 2025-09-04 00:11:56.988226 | orchestrator | 2025-09-04 00:11:56.988236 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-09-04 00:11:57.558362 | orchestrator | changed: [testbed-manager] 2025-09-04 00:11:57.558434 | orchestrator | 2025-09-04 00:11:57.558474 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-09-04 00:11:57.605194 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-04 00:11:57.605287 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-04 00:11:57.605302 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-09-04 00:11:57.605314 | orchestrator | deprecation_warnings=False in ansible.cfg. 
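The deprecation warning printed above already names the switch for turning these messages off. A minimal sketch of applying that hint, assuming the warnings should be silenced globally and that ~/.ansible.cfg is an acceptable place for the setting (any ansible.cfg that Ansible reads would work the same way):

    # Sketch only: suppress the Ansible deprecation warnings quoted in the log above.
    # The path ~/.ansible.cfg is an assumption made for illustration.
    printf '[defaults]\ndeprecation_warnings = False\n' >> ~/.ansible.cfg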
2025-09-04 00:11:59.355299 | orchestrator | changed: [testbed-manager] 2025-09-04 00:11:59.355418 | orchestrator | 2025-09-04 00:11:59.355434 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-09-04 00:12:08.605911 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-09-04 00:12:08.606004 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-09-04 00:12:08.606045 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-09-04 00:12:08.606059 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-09-04 00:12:08.606077 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-09-04 00:12:08.606087 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-09-04 00:12:08.606097 | orchestrator | 2025-09-04 00:12:08.606109 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-09-04 00:12:09.632692 | orchestrator | changed: [testbed-manager] 2025-09-04 00:12:09.632778 | orchestrator | 2025-09-04 00:12:09.632798 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-09-04 00:12:09.673165 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:12:09.673219 | orchestrator | 2025-09-04 00:12:09.673229 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-09-04 00:12:12.926090 | orchestrator | changed: [testbed-manager] 2025-09-04 00:12:12.926185 | orchestrator | 2025-09-04 00:12:12.926202 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-09-04 00:12:12.964176 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:12:12.964246 | orchestrator | 2025-09-04 00:12:12.964260 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-09-04 00:13:56.447705 | orchestrator | changed: [testbed-manager] 2025-09-04 00:13:56.447801 | orchestrator | 2025-09-04 00:13:56.447820 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-04 00:13:57.579549 | orchestrator | ok: [testbed-manager] 2025-09-04 00:13:57.579589 | orchestrator | 2025-09-04 00:13:57.579596 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:13:57.579603 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-09-04 00:13:57.579608 | orchestrator | 2025-09-04 00:13:58.142980 | orchestrator | ok: Runtime: 0:02:27.839861 2025-09-04 00:13:58.161161 | 2025-09-04 00:13:58.161318 | TASK [Reboot manager] 2025-09-04 00:13:59.699838 | orchestrator | ok: Runtime: 0:00:00.941521 2025-09-04 00:13:59.714897 | 2025-09-04 00:13:59.715038 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-04 00:14:17.292585 | orchestrator | ok 2025-09-04 00:14:17.303203 | 2025-09-04 00:14:17.303327 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-04 00:15:17.346375 | orchestrator | ok 2025-09-04 00:15:17.357045 | 2025-09-04 00:15:17.357182 | TASK [Deploy manager + bootstrap nodes] 2025-09-04 00:15:19.898601 | orchestrator | 2025-09-04 00:15:19.898809 | orchestrator | # DEPLOY MANAGER 2025-09-04 00:15:19.898835 | orchestrator | 2025-09-04 00:15:19.898849 | orchestrator | + set -e 2025-09-04 00:15:19.898862 | orchestrator | + echo 2025-09-04 00:15:19.898875 | orchestrator | + echo '# DEPLOY 
MANAGER' 2025-09-04 00:15:19.898893 | orchestrator | + echo 2025-09-04 00:15:19.898940 | orchestrator | + cat /opt/manager-vars.sh 2025-09-04 00:15:19.902694 | orchestrator | export NUMBER_OF_NODES=6 2025-09-04 00:15:19.902720 | orchestrator | 2025-09-04 00:15:19.902733 | orchestrator | export CEPH_VERSION=reef 2025-09-04 00:15:19.902745 | orchestrator | export CONFIGURATION_VERSION=main 2025-09-04 00:15:19.902756 | orchestrator | export MANAGER_VERSION=latest 2025-09-04 00:15:19.902778 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-09-04 00:15:19.902789 | orchestrator | 2025-09-04 00:15:19.902807 | orchestrator | export ARA=false 2025-09-04 00:15:19.902818 | orchestrator | export DEPLOY_MODE=manager 2025-09-04 00:15:19.902834 | orchestrator | export TEMPEST=true 2025-09-04 00:15:19.902845 | orchestrator | export IS_ZUUL=true 2025-09-04 00:15:19.902856 | orchestrator | 2025-09-04 00:15:19.902874 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.146 2025-09-04 00:15:19.902885 | orchestrator | export EXTERNAL_API=false 2025-09-04 00:15:19.902896 | orchestrator | 2025-09-04 00:15:19.902906 | orchestrator | export IMAGE_USER=ubuntu 2025-09-04 00:15:19.902920 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-09-04 00:15:19.902931 | orchestrator | 2025-09-04 00:15:19.902942 | orchestrator | export CEPH_STACK=ceph-ansible 2025-09-04 00:15:19.902957 | orchestrator | 2025-09-04 00:15:19.902969 | orchestrator | + echo 2025-09-04 00:15:19.902981 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-04 00:15:19.903910 | orchestrator | ++ export INTERACTIVE=false 2025-09-04 00:15:19.903928 | orchestrator | ++ INTERACTIVE=false 2025-09-04 00:15:19.903945 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-04 00:15:19.903959 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-04 00:15:19.904489 | orchestrator | + source /opt/manager-vars.sh 2025-09-04 00:15:19.904506 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-04 00:15:19.904519 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-04 00:15:19.904530 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-04 00:15:19.904550 | orchestrator | ++ CEPH_VERSION=reef 2025-09-04 00:15:19.904562 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-04 00:15:19.904573 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-04 00:15:19.904584 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-04 00:15:19.904627 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-04 00:15:19.904639 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-04 00:15:19.904664 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-04 00:15:19.904675 | orchestrator | ++ export ARA=false 2025-09-04 00:15:19.904686 | orchestrator | ++ ARA=false 2025-09-04 00:15:19.904835 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-04 00:15:19.904851 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-04 00:15:19.904862 | orchestrator | ++ export TEMPEST=true 2025-09-04 00:15:19.904873 | orchestrator | ++ TEMPEST=true 2025-09-04 00:15:19.905025 | orchestrator | ++ export IS_ZUUL=true 2025-09-04 00:15:19.905040 | orchestrator | ++ IS_ZUUL=true 2025-09-04 00:15:19.905051 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.146 2025-09-04 00:15:19.905062 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.146 2025-09-04 00:15:19.905073 | orchestrator | ++ export EXTERNAL_API=false 2025-09-04 00:15:19.905084 | orchestrator | ++ EXTERNAL_API=false 2025-09-04 00:15:19.905095 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-04 
00:15:19.905105 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-04 00:15:19.905116 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-04 00:15:19.905127 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-04 00:15:19.905137 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-04 00:15:19.905148 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-04 00:15:19.905159 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-09-04 00:15:19.966596 | orchestrator | + docker version 2025-09-04 00:15:20.240034 | orchestrator | Client: Docker Engine - Community 2025-09-04 00:15:20.240117 | orchestrator | Version: 27.5.1 2025-09-04 00:15:20.240131 | orchestrator | API version: 1.47 2025-09-04 00:15:20.240144 | orchestrator | Go version: go1.22.11 2025-09-04 00:15:20.240154 | orchestrator | Git commit: 9f9e405 2025-09-04 00:15:20.240165 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-09-04 00:15:20.240177 | orchestrator | OS/Arch: linux/amd64 2025-09-04 00:15:20.240187 | orchestrator | Context: default 2025-09-04 00:15:20.240198 | orchestrator | 2025-09-04 00:15:20.240210 | orchestrator | Server: Docker Engine - Community 2025-09-04 00:15:20.240221 | orchestrator | Engine: 2025-09-04 00:15:20.240232 | orchestrator | Version: 27.5.1 2025-09-04 00:15:20.240243 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-09-04 00:15:20.240280 | orchestrator | Go version: go1.22.11 2025-09-04 00:15:20.240291 | orchestrator | Git commit: 4c9b3b0 2025-09-04 00:15:20.240301 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-09-04 00:15:20.240312 | orchestrator | OS/Arch: linux/amd64 2025-09-04 00:15:20.240323 | orchestrator | Experimental: false 2025-09-04 00:15:20.240333 | orchestrator | containerd: 2025-09-04 00:15:20.240344 | orchestrator | Version: 1.7.27 2025-09-04 00:15:20.240355 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-09-04 00:15:20.240366 | orchestrator | runc: 2025-09-04 00:15:20.240377 | orchestrator | Version: 1.2.5 2025-09-04 00:15:20.240388 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-09-04 00:15:20.240398 | orchestrator | docker-init: 2025-09-04 00:15:20.240409 | orchestrator | Version: 0.19.0 2025-09-04 00:15:20.240421 | orchestrator | GitCommit: de40ad0 2025-09-04 00:15:20.243595 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-09-04 00:15:20.253901 | orchestrator | + set -e 2025-09-04 00:15:20.253990 | orchestrator | + source /opt/manager-vars.sh 2025-09-04 00:15:20.254006 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-04 00:15:20.254060 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-04 00:15:20.254072 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-04 00:15:20.254082 | orchestrator | ++ CEPH_VERSION=reef 2025-09-04 00:15:20.254094 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-04 00:15:20.254105 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-04 00:15:20.254116 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-04 00:15:20.254127 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-04 00:15:20.254138 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-04 00:15:20.254149 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-04 00:15:20.254159 | orchestrator | ++ export ARA=false 2025-09-04 00:15:20.254170 | orchestrator | ++ ARA=false 2025-09-04 00:15:20.254181 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-04 00:15:20.254192 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-04 00:15:20.254203 | orchestrator | ++ 
export TEMPEST=true 2025-09-04 00:15:20.254213 | orchestrator | ++ TEMPEST=true 2025-09-04 00:15:20.254224 | orchestrator | ++ export IS_ZUUL=true 2025-09-04 00:15:20.254234 | orchestrator | ++ IS_ZUUL=true 2025-09-04 00:15:20.254370 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.146 2025-09-04 00:15:20.254386 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.146 2025-09-04 00:15:20.254397 | orchestrator | ++ export EXTERNAL_API=false 2025-09-04 00:15:20.254408 | orchestrator | ++ EXTERNAL_API=false 2025-09-04 00:15:20.254419 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-04 00:15:20.254480 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-04 00:15:20.254501 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-04 00:15:20.254511 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-04 00:15:20.254523 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-04 00:15:20.254534 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-04 00:15:20.254545 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-04 00:15:20.254556 | orchestrator | ++ export INTERACTIVE=false 2025-09-04 00:15:20.254566 | orchestrator | ++ INTERACTIVE=false 2025-09-04 00:15:20.254577 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-04 00:15:20.254592 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-04 00:15:20.254661 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-04 00:15:20.254674 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-04 00:15:20.254685 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2025-09-04 00:15:20.261827 | orchestrator | + set -e 2025-09-04 00:15:20.262326 | orchestrator | + VERSION=reef 2025-09-04 00:15:20.262910 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-09-04 00:15:20.269415 | orchestrator | + [[ -n ceph_version: reef ]] 2025-09-04 00:15:20.269460 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2025-09-04 00:15:20.275177 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2025-09-04 00:15:20.281737 | orchestrator | + set -e 2025-09-04 00:15:20.281770 | orchestrator | + VERSION=2024.2 2025-09-04 00:15:20.282769 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-09-04 00:15:20.286756 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2025-09-04 00:15:20.286782 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2025-09-04 00:15:20.293148 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-09-04 00:15:20.293712 | orchestrator | ++ semver latest 7.0.0 2025-09-04 00:15:20.356682 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-04 00:15:20.356746 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-04 00:15:20.356757 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-09-04 00:15:20.356766 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-09-04 00:15:20.450878 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-09-04 00:15:20.452144 | orchestrator | + source /opt/venv/bin/activate 2025-09-04 00:15:20.453547 | orchestrator | ++ deactivate nondestructive 2025-09-04 00:15:20.453563 | orchestrator | ++ '[' -n '' ']' 2025-09-04 00:15:20.453572 | orchestrator | ++ '[' -n '' ']' 2025-09-04 00:15:20.453581 | orchestrator | ++ hash -r 2025-09-04 00:15:20.453654 | orchestrator | ++ 
'[' -n '' ']' 2025-09-04 00:15:20.453666 | orchestrator | ++ unset VIRTUAL_ENV 2025-09-04 00:15:20.453675 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-09-04 00:15:20.453684 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-09-04 00:15:20.453862 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-09-04 00:15:20.453885 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-09-04 00:15:20.453949 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-09-04 00:15:20.453962 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-09-04 00:15:20.454232 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-04 00:15:20.454250 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-04 00:15:20.454259 | orchestrator | ++ export PATH 2025-09-04 00:15:20.454339 | orchestrator | ++ '[' -n '' ']' 2025-09-04 00:15:20.454516 | orchestrator | ++ '[' -z '' ']' 2025-09-04 00:15:20.454531 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-09-04 00:15:20.454540 | orchestrator | ++ PS1='(venv) ' 2025-09-04 00:15:20.454548 | orchestrator | ++ export PS1 2025-09-04 00:15:20.454562 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-09-04 00:15:20.454620 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-09-04 00:15:20.454686 | orchestrator | ++ hash -r 2025-09-04 00:15:20.454991 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-09-04 00:15:21.773408 | orchestrator | 2025-09-04 00:15:21.773550 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-09-04 00:15:21.773576 | orchestrator | 2025-09-04 00:15:21.773595 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-04 00:15:22.379633 | orchestrator | ok: [testbed-manager] 2025-09-04 00:15:22.379725 | orchestrator | 2025-09-04 00:15:22.379742 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-09-04 00:15:23.387484 | orchestrator | changed: [testbed-manager] 2025-09-04 00:15:23.387577 | orchestrator | 2025-09-04 00:15:23.387594 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-09-04 00:15:23.387607 | orchestrator | 2025-09-04 00:15:23.387618 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-04 00:15:25.847404 | orchestrator | ok: [testbed-manager] 2025-09-04 00:15:25.847538 | orchestrator | 2025-09-04 00:15:25.847555 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-09-04 00:15:25.915464 | orchestrator | ok: [testbed-manager] 2025-09-04 00:15:25.915517 | orchestrator | 2025-09-04 00:15:25.915531 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-09-04 00:15:26.407079 | orchestrator | changed: [testbed-manager] 2025-09-04 00:15:26.407161 | orchestrator | 2025-09-04 00:15:26.407176 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-09-04 00:15:26.450256 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:15:26.450325 | orchestrator | 2025-09-04 00:15:26.450339 | orchestrator | TASK [Install HWE kernel package on Ubuntu] 
************************************ 2025-09-04 00:15:26.796242 | orchestrator | changed: [testbed-manager] 2025-09-04 00:15:26.796342 | orchestrator | 2025-09-04 00:15:26.796358 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-09-04 00:15:26.847402 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:15:26.847476 | orchestrator | 2025-09-04 00:15:26.847491 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-09-04 00:15:27.198296 | orchestrator | ok: [testbed-manager] 2025-09-04 00:15:27.198368 | orchestrator | 2025-09-04 00:15:27.198380 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-09-04 00:15:27.328506 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:15:27.328574 | orchestrator | 2025-09-04 00:15:27.328586 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-09-04 00:15:27.328597 | orchestrator | 2025-09-04 00:15:27.328611 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-04 00:15:29.107010 | orchestrator | ok: [testbed-manager] 2025-09-04 00:15:29.107074 | orchestrator | 2025-09-04 00:15:29.107083 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-09-04 00:15:29.214620 | orchestrator | included: osism.services.traefik for testbed-manager 2025-09-04 00:15:29.214677 | orchestrator | 2025-09-04 00:15:29.214684 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-09-04 00:15:29.269064 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-09-04 00:15:29.269125 | orchestrator | 2025-09-04 00:15:29.269134 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-09-04 00:15:30.388534 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-09-04 00:15:30.388624 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-09-04 00:15:30.388638 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-09-04 00:15:30.388650 | orchestrator | 2025-09-04 00:15:30.388662 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-09-04 00:15:32.222764 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-09-04 00:15:32.222889 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-09-04 00:15:32.222908 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-09-04 00:15:32.222919 | orchestrator | 2025-09-04 00:15:32.222931 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-09-04 00:15:32.863941 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-04 00:15:32.864031 | orchestrator | changed: [testbed-manager] 2025-09-04 00:15:32.864045 | orchestrator | 2025-09-04 00:15:32.864057 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-09-04 00:15:33.537110 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-04 00:15:33.537216 | orchestrator | changed: [testbed-manager] 2025-09-04 00:15:33.537232 | orchestrator | 2025-09-04 00:15:33.537245 | orchestrator | TASK [osism.services.traefik : Copy dynamic 
configuration] ********************* 2025-09-04 00:15:33.598778 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:15:33.598804 | orchestrator | 2025-09-04 00:15:33.598816 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-09-04 00:15:33.976700 | orchestrator | ok: [testbed-manager] 2025-09-04 00:15:33.976790 | orchestrator | 2025-09-04 00:15:33.976804 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-09-04 00:15:34.058728 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-09-04 00:15:34.058816 | orchestrator | 2025-09-04 00:15:34.058829 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-09-04 00:15:35.113227 | orchestrator | changed: [testbed-manager] 2025-09-04 00:15:35.113329 | orchestrator | 2025-09-04 00:15:35.113345 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-09-04 00:15:35.948199 | orchestrator | changed: [testbed-manager] 2025-09-04 00:15:35.948299 | orchestrator | 2025-09-04 00:15:35.948314 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-09-04 00:15:47.007587 | orchestrator | changed: [testbed-manager] 2025-09-04 00:15:47.007683 | orchestrator | 2025-09-04 00:15:47.007701 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-09-04 00:15:47.069716 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:15:47.069747 | orchestrator | 2025-09-04 00:15:47.069760 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-09-04 00:15:47.069771 | orchestrator | 2025-09-04 00:15:47.069782 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-04 00:15:48.826716 | orchestrator | ok: [testbed-manager] 2025-09-04 00:15:48.826810 | orchestrator | 2025-09-04 00:15:48.826855 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-09-04 00:15:48.961492 | orchestrator | included: osism.services.manager for testbed-manager 2025-09-04 00:15:48.961584 | orchestrator | 2025-09-04 00:15:48.961598 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-09-04 00:15:49.030508 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-09-04 00:15:49.030549 | orchestrator | 2025-09-04 00:15:49.030561 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-09-04 00:15:51.694233 | orchestrator | ok: [testbed-manager] 2025-09-04 00:15:51.694327 | orchestrator | 2025-09-04 00:15:51.694344 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-09-04 00:15:51.745095 | orchestrator | ok: [testbed-manager] 2025-09-04 00:15:51.745126 | orchestrator | 2025-09-04 00:15:51.745139 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-09-04 00:15:51.875543 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-09-04 00:15:51.875582 | orchestrator | 2025-09-04 00:15:51.875595 | 
orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-09-04 00:15:54.775616 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-09-04 00:15:54.775728 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-09-04 00:15:54.775744 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-09-04 00:15:54.775756 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-09-04 00:15:54.775767 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-09-04 00:15:54.775779 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-09-04 00:15:54.775790 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-09-04 00:15:54.775801 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-09-04 00:15:54.775813 | orchestrator | 2025-09-04 00:15:54.775825 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2025-09-04 00:15:55.444760 | orchestrator | changed: [testbed-manager] 2025-09-04 00:15:55.444847 | orchestrator | 2025-09-04 00:15:55.444863 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-09-04 00:15:56.107358 | orchestrator | changed: [testbed-manager] 2025-09-04 00:15:56.107503 | orchestrator | 2025-09-04 00:15:56.107519 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-09-04 00:15:56.196967 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-09-04 00:15:56.197037 | orchestrator | 2025-09-04 00:15:56.197048 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-09-04 00:15:57.469401 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-09-04 00:15:57.469541 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-09-04 00:15:57.469557 | orchestrator | 2025-09-04 00:15:57.469570 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-09-04 00:15:58.147962 | orchestrator | changed: [testbed-manager] 2025-09-04 00:15:58.148063 | orchestrator | 2025-09-04 00:15:58.148078 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-09-04 00:15:58.210539 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:15:58.210627 | orchestrator | 2025-09-04 00:15:58.210642 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2025-09-04 00:15:58.294942 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2025-09-04 00:15:58.295033 | orchestrator | 2025-09-04 00:15:58.295050 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2025-09-04 00:15:58.939119 | orchestrator | changed: [testbed-manager] 2025-09-04 00:15:58.939220 | orchestrator | 2025-09-04 00:15:58.939235 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-09-04 00:15:59.011468 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-09-04 00:15:59.011570 | orchestrator | 2025-09-04 00:15:59.011584 | orchestrator | 
TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-09-04 00:16:00.443831 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-04 00:16:00.443938 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-04 00:16:00.443954 | orchestrator | changed: [testbed-manager] 2025-09-04 00:16:00.443967 | orchestrator | 2025-09-04 00:16:00.443978 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-09-04 00:16:01.075626 | orchestrator | changed: [testbed-manager] 2025-09-04 00:16:01.075727 | orchestrator | 2025-09-04 00:16:01.075741 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-09-04 00:16:01.133391 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:16:01.133454 | orchestrator | 2025-09-04 00:16:01.133467 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-09-04 00:16:01.222590 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-09-04 00:16:01.222618 | orchestrator | 2025-09-04 00:16:01.222629 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-09-04 00:16:01.762204 | orchestrator | changed: [testbed-manager] 2025-09-04 00:16:01.762272 | orchestrator | 2025-09-04 00:16:01.762283 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-09-04 00:16:02.190819 | orchestrator | changed: [testbed-manager] 2025-09-04 00:16:02.190915 | orchestrator | 2025-09-04 00:16:02.190928 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-09-04 00:16:03.410307 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-09-04 00:16:03.410479 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-09-04 00:16:03.410497 | orchestrator | 2025-09-04 00:16:03.410511 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-09-04 00:16:04.072415 | orchestrator | changed: [testbed-manager] 2025-09-04 00:16:04.072550 | orchestrator | 2025-09-04 00:16:04.072565 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-09-04 00:16:04.479376 | orchestrator | ok: [testbed-manager] 2025-09-04 00:16:04.479516 | orchestrator | 2025-09-04 00:16:04.479534 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-09-04 00:16:04.851339 | orchestrator | changed: [testbed-manager] 2025-09-04 00:16:04.851486 | orchestrator | 2025-09-04 00:16:04.851504 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-09-04 00:16:04.901663 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:16:04.901695 | orchestrator | 2025-09-04 00:16:04.901708 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-09-04 00:16:04.979512 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-09-04 00:16:04.979574 | orchestrator | 2025-09-04 00:16:04.979582 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-09-04 00:16:05.021206 | orchestrator | ok: [testbed-manager] 2025-09-04 00:16:05.021271 | 
orchestrator | 2025-09-04 00:16:05.021284 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-09-04 00:16:07.107587 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-09-04 00:16:07.107689 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-09-04 00:16:07.107704 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-09-04 00:16:07.107715 | orchestrator | 2025-09-04 00:16:07.107727 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-09-04 00:16:07.846694 | orchestrator | changed: [testbed-manager] 2025-09-04 00:16:07.846791 | orchestrator | 2025-09-04 00:16:07.846806 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-09-04 00:16:08.563169 | orchestrator | changed: [testbed-manager] 2025-09-04 00:16:08.563285 | orchestrator | 2025-09-04 00:16:08.563301 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-09-04 00:16:09.284571 | orchestrator | changed: [testbed-manager] 2025-09-04 00:16:09.284667 | orchestrator | 2025-09-04 00:16:09.284682 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-09-04 00:16:09.364503 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-09-04 00:16:09.364564 | orchestrator | 2025-09-04 00:16:09.364575 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-09-04 00:16:09.402591 | orchestrator | ok: [testbed-manager] 2025-09-04 00:16:09.402630 | orchestrator | 2025-09-04 00:16:09.402641 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-09-04 00:16:10.114631 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-09-04 00:16:10.114729 | orchestrator | 2025-09-04 00:16:10.114743 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-09-04 00:16:10.197781 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-09-04 00:16:10.197810 | orchestrator | 2025-09-04 00:16:10.197822 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-09-04 00:16:10.915574 | orchestrator | changed: [testbed-manager] 2025-09-04 00:16:10.915667 | orchestrator | 2025-09-04 00:16:10.915680 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-09-04 00:16:11.521921 | orchestrator | ok: [testbed-manager] 2025-09-04 00:16:11.522062 | orchestrator | 2025-09-04 00:16:11.522078 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-09-04 00:16:11.585491 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:16:11.585563 | orchestrator | 2025-09-04 00:16:11.585576 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-09-04 00:16:11.633588 | orchestrator | ok: [testbed-manager] 2025-09-04 00:16:11.633652 | orchestrator | 2025-09-04 00:16:11.633665 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-09-04 00:16:12.499394 | orchestrator | changed: [testbed-manager] 2025-09-04 
00:16:12.499519 | orchestrator | 2025-09-04 00:16:12.499536 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-09-04 00:17:44.517118 | orchestrator | changed: [testbed-manager] 2025-09-04 00:17:44.517227 | orchestrator | 2025-09-04 00:17:44.517243 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-09-04 00:17:45.542835 | orchestrator | ok: [testbed-manager] 2025-09-04 00:17:45.542930 | orchestrator | 2025-09-04 00:17:45.542944 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2025-09-04 00:17:45.604652 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:17:45.604677 | orchestrator | 2025-09-04 00:17:45.604691 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-09-04 00:18:13.843119 | orchestrator | changed: [testbed-manager] 2025-09-04 00:18:13.843233 | orchestrator | 2025-09-04 00:18:13.843249 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-09-04 00:18:13.888631 | orchestrator | ok: [testbed-manager] 2025-09-04 00:18:13.888657 | orchestrator | 2025-09-04 00:18:13.888669 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-09-04 00:18:13.888680 | orchestrator | 2025-09-04 00:18:13.888691 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-09-04 00:18:13.936977 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:18:13.937035 | orchestrator | 2025-09-04 00:18:13.937051 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-09-04 00:19:13.994557 | orchestrator | Pausing for 60 seconds 2025-09-04 00:19:13.994679 | orchestrator | changed: [testbed-manager] 2025-09-04 00:19:13.994695 | orchestrator | 2025-09-04 00:19:13.994708 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-09-04 00:19:18.147632 | orchestrator | changed: [testbed-manager] 2025-09-04 00:19:18.147742 | orchestrator | 2025-09-04 00:19:18.147760 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-09-04 00:20:20.431007 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-09-04 00:20:20.431128 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-09-04 00:20:20.431144 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 
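The FAILED - RETRYING lines above come from a handler that simply polls until Docker reports the manager service as healthy; the deploy script traced further below does the same thing by hand with docker inspect (wait_for_container_healthy). A minimal bash sketch of that pattern, where the function name, the container name, the retry budget and the sleep interval are illustrative choices:

    # Sketch only: poll a container until Docker reports its health status as "healthy".
    wait_for_healthy() {
        local name="$1" max_attempts="${2:-50}" attempt=1
        until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" == "healthy" ]]; do
            if (( attempt >= max_attempts )); then
                echo "$name did not become healthy after $max_attempts attempts" >&2
                return 1
            fi
            attempt=$(( attempt + 1 ))
            sleep 5
        done
    }

    # Example: wait for the manager API container listed later in this log.
    wait_for_healthy manager-api-1 50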
2025-09-04 00:20:20.431184 | orchestrator | changed: [testbed-manager] 2025-09-04 00:20:20.431198 | orchestrator | 2025-09-04 00:20:20.431210 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-09-04 00:20:30.494740 | orchestrator | changed: [testbed-manager] 2025-09-04 00:20:30.494885 | orchestrator | 2025-09-04 00:20:30.494904 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-09-04 00:20:30.576734 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-09-04 00:20:30.576808 | orchestrator | 2025-09-04 00:20:30.576823 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-09-04 00:20:30.576835 | orchestrator | 2025-09-04 00:20:30.576847 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-09-04 00:20:30.626399 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:20:30.626429 | orchestrator | 2025-09-04 00:20:30.626440 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:20:30.626453 | orchestrator | testbed-manager : ok=66 changed=36 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-09-04 00:20:30.626464 | orchestrator | 2025-09-04 00:20:30.730060 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-09-04 00:20:30.730132 | orchestrator | + deactivate 2025-09-04 00:20:30.730145 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-09-04 00:20:30.730177 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-04 00:20:30.730187 | orchestrator | + export PATH 2025-09-04 00:20:30.730197 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-09-04 00:20:30.730207 | orchestrator | + '[' -n '' ']' 2025-09-04 00:20:30.730217 | orchestrator | + hash -r 2025-09-04 00:20:30.730227 | orchestrator | + '[' -n '' ']' 2025-09-04 00:20:30.730237 | orchestrator | + unset VIRTUAL_ENV 2025-09-04 00:20:30.730246 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-09-04 00:20:30.730256 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-09-04 00:20:30.730266 | orchestrator | + unset -f deactivate 2025-09-04 00:20:30.730277 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-09-04 00:20:30.737900 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-09-04 00:20:30.737941 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-09-04 00:20:30.737953 | orchestrator | + local max_attempts=60 2025-09-04 00:20:30.737964 | orchestrator | + local name=ceph-ansible 2025-09-04 00:20:30.737975 | orchestrator | + local attempt_num=1 2025-09-04 00:20:30.738829 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-04 00:20:30.780644 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-04 00:20:30.780682 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-09-04 00:20:30.780694 | orchestrator | + local max_attempts=60 2025-09-04 00:20:30.780705 | orchestrator | + local name=kolla-ansible 2025-09-04 00:20:30.780716 | orchestrator | + local attempt_num=1 2025-09-04 00:20:30.781411 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-09-04 00:20:30.815684 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-04 00:20:30.815724 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-09-04 00:20:30.815736 | orchestrator | + local max_attempts=60 2025-09-04 00:20:30.815746 | orchestrator | + local name=osism-ansible 2025-09-04 00:20:30.815757 | orchestrator | + local attempt_num=1 2025-09-04 00:20:30.816299 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-09-04 00:20:30.848671 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-04 00:20:30.848708 | orchestrator | + [[ true == \t\r\u\e ]] 2025-09-04 00:20:30.848719 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-09-04 00:20:31.587069 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-09-04 00:20:31.811286 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-09-04 00:20:31.811433 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2025-09-04 00:20:31.811475 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2025-09-04 00:20:31.811488 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2025-09-04 00:20:31.811502 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2025-09-04 00:20:31.811524 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2025-09-04 00:20:31.811536 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2025-09-04 00:20:31.811546 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2025-09-04 00:20:31.811557 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up 2 
minutes (healthy) 2025-09-04 00:20:31.811568 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2025-09-04 00:20:31.811579 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 2025-09-04 00:20:31.811590 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2025-09-04 00:20:31.811600 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2025-09-04 00:20:31.811611 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2025-09-04 00:20:31.811622 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2025-09-04 00:20:31.811633 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2025-09-04 00:20:31.819296 | orchestrator | ++ semver latest 7.0.0 2025-09-04 00:20:31.878325 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-04 00:20:31.878412 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-04 00:20:31.878427 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-09-04 00:20:31.883269 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-09-04 00:20:44.129782 | orchestrator | 2025-09-04 00:20:44 | INFO  | Task 618d01e3-8f16-41cf-8719-66f7fb5b55f1 (resolvconf) was prepared for execution. 2025-09-04 00:20:44.129896 | orchestrator | 2025-09-04 00:20:44 | INFO  | It takes a moment until task 618d01e3-8f16-41cf-8719-66f7fb5b55f1 (resolvconf) has been started and output is visible here. 
2025-09-04 00:20:58.965476 | orchestrator | 2025-09-04 00:20:58.965568 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-09-04 00:20:58.965584 | orchestrator | 2025-09-04 00:20:58.965617 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-04 00:20:58.965630 | orchestrator | Thursday 04 September 2025 00:20:48 +0000 (0:00:00.153) 0:00:00.153 **** 2025-09-04 00:20:58.965641 | orchestrator | ok: [testbed-manager] 2025-09-04 00:20:58.965653 | orchestrator | 2025-09-04 00:20:58.965664 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-09-04 00:20:58.965675 | orchestrator | Thursday 04 September 2025 00:20:52 +0000 (0:00:04.839) 0:00:04.992 **** 2025-09-04 00:20:58.965686 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:20:58.965697 | orchestrator | 2025-09-04 00:20:58.965707 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-09-04 00:20:58.965718 | orchestrator | Thursday 04 September 2025 00:20:53 +0000 (0:00:00.070) 0:00:05.062 **** 2025-09-04 00:20:58.965729 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-09-04 00:20:58.965740 | orchestrator | 2025-09-04 00:20:58.965751 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-09-04 00:20:58.965761 | orchestrator | Thursday 04 September 2025 00:20:53 +0000 (0:00:00.079) 0:00:05.142 **** 2025-09-04 00:20:58.965772 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-09-04 00:20:58.965783 | orchestrator | 2025-09-04 00:20:58.965793 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-09-04 00:20:58.965804 | orchestrator | Thursday 04 September 2025 00:20:53 +0000 (0:00:00.073) 0:00:05.215 **** 2025-09-04 00:20:58.965814 | orchestrator | ok: [testbed-manager] 2025-09-04 00:20:58.965825 | orchestrator | 2025-09-04 00:20:58.965836 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-09-04 00:20:58.965846 | orchestrator | Thursday 04 September 2025 00:20:54 +0000 (0:00:01.080) 0:00:06.296 **** 2025-09-04 00:20:58.965857 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:20:58.965868 | orchestrator | 2025-09-04 00:20:58.965878 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-09-04 00:20:58.965889 | orchestrator | Thursday 04 September 2025 00:20:54 +0000 (0:00:00.045) 0:00:06.341 **** 2025-09-04 00:20:58.965899 | orchestrator | ok: [testbed-manager] 2025-09-04 00:20:58.965910 | orchestrator | 2025-09-04 00:20:58.965920 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-09-04 00:20:58.965931 | orchestrator | Thursday 04 September 2025 00:20:54 +0000 (0:00:00.488) 0:00:06.830 **** 2025-09-04 00:20:58.965941 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:20:58.965952 | orchestrator | 2025-09-04 00:20:58.965963 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-09-04 00:20:58.965974 | orchestrator | Thursday 04 September 2025 00:20:54 +0000 (0:00:00.093) 
0:00:06.923 **** 2025-09-04 00:20:58.965984 | orchestrator | changed: [testbed-manager] 2025-09-04 00:20:58.965995 | orchestrator | 2025-09-04 00:20:58.966008 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-09-04 00:20:58.966088 | orchestrator | Thursday 04 September 2025 00:20:55 +0000 (0:00:00.523) 0:00:07.446 **** 2025-09-04 00:20:58.966101 | orchestrator | changed: [testbed-manager] 2025-09-04 00:20:58.966113 | orchestrator | 2025-09-04 00:20:58.966127 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-09-04 00:20:58.966140 | orchestrator | Thursday 04 September 2025 00:20:56 +0000 (0:00:01.063) 0:00:08.510 **** 2025-09-04 00:20:58.966153 | orchestrator | ok: [testbed-manager] 2025-09-04 00:20:58.966165 | orchestrator | 2025-09-04 00:20:58.966177 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-09-04 00:20:58.966190 | orchestrator | Thursday 04 September 2025 00:20:57 +0000 (0:00:00.987) 0:00:09.497 **** 2025-09-04 00:20:58.966212 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-09-04 00:20:58.966233 | orchestrator | 2025-09-04 00:20:58.966246 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-09-04 00:20:58.966258 | orchestrator | Thursday 04 September 2025 00:20:57 +0000 (0:00:00.085) 0:00:09.582 **** 2025-09-04 00:20:58.966270 | orchestrator | changed: [testbed-manager] 2025-09-04 00:20:58.966283 | orchestrator | 2025-09-04 00:20:58.966296 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:20:58.966309 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-04 00:20:58.966322 | orchestrator | 2025-09-04 00:20:58.966335 | orchestrator | 2025-09-04 00:20:58.966371 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 00:20:58.966382 | orchestrator | Thursday 04 September 2025 00:20:58 +0000 (0:00:01.164) 0:00:10.747 **** 2025-09-04 00:20:58.966392 | orchestrator | =============================================================================== 2025-09-04 00:20:58.966403 | orchestrator | Gathering Facts --------------------------------------------------------- 4.84s 2025-09-04 00:20:58.966413 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.17s 2025-09-04 00:20:58.966424 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.08s 2025-09-04 00:20:58.966434 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.06s 2025-09-04 00:20:58.966445 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.99s 2025-09-04 00:20:58.966455 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.52s 2025-09-04 00:20:58.966482 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.49s 2025-09-04 00:20:58.966494 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s 2025-09-04 00:20:58.966505 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2025-09-04 
00:20:58.966516 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2025-09-04 00:20:58.966526 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s 2025-09-04 00:20:58.966537 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2025-09-04 00:20:58.966547 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.05s 2025-09-04 00:20:59.252906 | orchestrator | + osism apply sshconfig 2025-09-04 00:21:11.371669 | orchestrator | 2025-09-04 00:21:11 | INFO  | Task 013b5848-ad79-4230-ab31-febef68c9777 (sshconfig) was prepared for execution. 2025-09-04 00:21:11.371783 | orchestrator | 2025-09-04 00:21:11 | INFO  | It takes a moment until task 013b5848-ad79-4230-ab31-febef68c9777 (sshconfig) has been started and output is visible here. 2025-09-04 00:21:23.337089 | orchestrator | 2025-09-04 00:21:23.337215 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-09-04 00:21:23.337233 | orchestrator | 2025-09-04 00:21:23.337245 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-09-04 00:21:23.337258 | orchestrator | Thursday 04 September 2025 00:21:15 +0000 (0:00:00.167) 0:00:00.167 **** 2025-09-04 00:21:23.337269 | orchestrator | ok: [testbed-manager] 2025-09-04 00:21:23.337282 | orchestrator | 2025-09-04 00:21:23.337293 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-09-04 00:21:23.337304 | orchestrator | Thursday 04 September 2025 00:21:15 +0000 (0:00:00.567) 0:00:00.734 **** 2025-09-04 00:21:23.337315 | orchestrator | changed: [testbed-manager] 2025-09-04 00:21:23.337359 | orchestrator | 2025-09-04 00:21:23.337372 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-09-04 00:21:23.337384 | orchestrator | Thursday 04 September 2025 00:21:16 +0000 (0:00:00.556) 0:00:01.291 **** 2025-09-04 00:21:23.337395 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-09-04 00:21:23.337432 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-09-04 00:21:23.337444 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-09-04 00:21:23.337455 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-09-04 00:21:23.337465 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-09-04 00:21:23.337493 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-09-04 00:21:23.337504 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-09-04 00:21:23.337515 | orchestrator | 2025-09-04 00:21:23.337525 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-09-04 00:21:23.337536 | orchestrator | Thursday 04 September 2025 00:21:22 +0000 (0:00:05.981) 0:00:07.272 **** 2025-09-04 00:21:23.337546 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:21:23.337557 | orchestrator | 2025-09-04 00:21:23.337568 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-09-04 00:21:23.337578 | orchestrator | Thursday 04 September 2025 00:21:22 +0000 (0:00:00.076) 0:00:07.349 **** 2025-09-04 00:21:23.337589 | orchestrator | changed: [testbed-manager] 2025-09-04 00:21:23.337599 | orchestrator | 2025-09-04 00:21:23.337610 | 
orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:21:23.337622 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-04 00:21:23.337635 | orchestrator | 2025-09-04 00:21:23.337647 | orchestrator | 2025-09-04 00:21:23.337661 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 00:21:23.337674 | orchestrator | Thursday 04 September 2025 00:21:23 +0000 (0:00:00.627) 0:00:07.976 **** 2025-09-04 00:21:23.337687 | orchestrator | =============================================================================== 2025-09-04 00:21:23.337700 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.98s 2025-09-04 00:21:23.337712 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.63s 2025-09-04 00:21:23.337725 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.57s 2025-09-04 00:21:23.337737 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.56s 2025-09-04 00:21:23.337749 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s 2025-09-04 00:21:23.632521 | orchestrator | + osism apply known-hosts 2025-09-04 00:21:35.611420 | orchestrator | 2025-09-04 00:21:35 | INFO  | Task 9269450f-b852-4579-a3cf-f7e462e9f392 (known-hosts) was prepared for execution. 2025-09-04 00:21:35.611536 | orchestrator | 2025-09-04 00:21:35 | INFO  | It takes a moment until task 9269450f-b852-4579-a3cf-f7e462e9f392 (known-hosts) has been started and output is visible here. 2025-09-04 00:21:52.304299 | orchestrator | 2025-09-04 00:21:52.304460 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-09-04 00:21:52.304478 | orchestrator | 2025-09-04 00:21:52.304491 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-09-04 00:21:52.304504 | orchestrator | Thursday 04 September 2025 00:21:39 +0000 (0:00:00.173) 0:00:00.173 **** 2025-09-04 00:21:52.304516 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-09-04 00:21:52.304527 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-09-04 00:21:52.304539 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-09-04 00:21:52.304550 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-09-04 00:21:52.304561 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-09-04 00:21:52.304572 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-09-04 00:21:52.304583 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-09-04 00:21:52.304594 | orchestrator | 2025-09-04 00:21:52.304605 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-09-04 00:21:52.304617 | orchestrator | Thursday 04 September 2025 00:21:45 +0000 (0:00:05.883) 0:00:06.057 **** 2025-09-04 00:21:52.304651 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-09-04 00:21:52.304664 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned 
entries of testbed-node-0) 2025-09-04 00:21:52.304675 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-09-04 00:21:52.304686 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-09-04 00:21:52.304697 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-09-04 00:21:52.304719 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-09-04 00:21:52.304731 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-09-04 00:21:52.304741 | orchestrator | 2025-09-04 00:21:52.304752 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-04 00:21:52.304763 | orchestrator | Thursday 04 September 2025 00:21:45 +0000 (0:00:00.170) 0:00:06.227 **** 2025-09-04 00:21:52.304778 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDN9lDmVa/FzGaUkHyUgUyJppBw0vEX5XIvXNQzm9aiymfDn03ia8dJxy4j8Z+9PxO+kT8SOSqiXrWcaUs8ZavrKjMf0FFn0iTJQCKuQ5MK4RPWCBu07TxTzL9m3BaPXSHP+XsY/B7VZKwa+hfVQG48xbWokzC6ovU6zWDkoCgVgb0lvllDmvXi3lukS4MpMv+BjukGOwI3MT0FlBP2+8kztJCFPUOzk8+1YdxADtCYipLJNxkunyntZ2l5Pdxlt0hbjtKPbj4LCj4x9T1nawrBNqu6LagGfLd4IFUj5fpX30XJdF3tWCKrGcWHnfxxl8odlq/+R5lQT3g24V0gyQB7P0sr4+fouCNml2eOpURvzQ9j4IB/lj1BEbWHZYNvS8hgGS2qHpyukoYz7RztNhlRon9tx+pMfH+xs1VueIdpjD5kWFSfBlJ7IQtZz93MJre8V0ye8N/QbishXjxDFuAOO3ecME8U9xOD+pokGBqPki8QUmHAAci3bVm1eUvCRc8=) 2025-09-04 00:21:52.304792 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNJjccbe5Ce1JYNgTrr0Y9ZdGClBIGWVoTihzkGnmL7cMam4IZ33FS4izhHMG/8WTP9HR4l31EkUH7q2sYFPSQw=) 2025-09-04 00:21:52.304808 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGnxLjuzM+noqHyh56YOTK+fTj5DfuGIFcxXObWwXv3x) 2025-09-04 00:21:52.304823 | orchestrator | 2025-09-04 00:21:52.304837 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-04 00:21:52.304849 | orchestrator | Thursday 04 September 2025 00:21:46 +0000 (0:00:01.183) 0:00:07.410 **** 2025-09-04 00:21:52.304862 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN2d4HJxeMimjTlogeA2GdCyBakzIYGdBKb4ppiYxqDx) 2025-09-04 00:21:52.304908 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCGvs/zaT3uGJQy2YKxqsUi653jiqyamc1vh2sQ3hyJ8+stBms5JhbQ81a/KDq5jae43iR5npx8JM7ONfeVP7RP4UwhoLQzbs4jO90sIfCzzX9bMJCCjaIRlHTZSk0FgVxQfQbXXrVvFTj/hapQ3sq5YweWTPbAJ4i7uJ7uJFZaq2rm0Jst6+oRQztr8x2u/K9hnj7CWQZV0TxGrJ1mVVI4dPNcK7KiiLuv9lHZE7yMDOyZKnIoUEl3UOGXQGjQdWhqbr06k7HObSZMwGLR/W3IqC0rgHOhGbh7qBSeFG5Jn/USgLCqcTf4QVKZ6iUjyenTv2k40kFO3f2KadsmGc1rUUKlFSns97IbXhY834Vq/29+WFc06ftX12+ec3deGTAgKIfB0ZuodsWGTrIvAD08fksV4OI9EJgNqsTO2y6wxLqgKQxiwgfF0XbvrNw3byVks1YEYogVOwv6/QUxg4RvNSgWMhbIzOB0fsOVlRvloZ8S9I/kAUwXZNJN/AZ8wgE=) 2025-09-04 00:21:52.304921 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPpSqDDRmSFpwi4AkP3uzk18aZkMNmBBigrleTkq61o97RKfPEBLiJMScKj6GlbBw6wDVIIs5v/Hrwq/nFWuHO8=) 2025-09-04 00:21:52.304943 | orchestrator | 2025-09-04 00:21:52.304956 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-04 00:21:52.304969 | orchestrator | Thursday 04 September 2025 00:21:47 +0000 (0:00:01.089) 0:00:08.500 **** 2025-09-04 00:21:52.304982 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDQ17BBFzjcYA7YDF1c0o0IA1AG2XWAUZwGNRBeKK35A2JeIS//dDRyozUYYDP5jEaC4g6mVhmH062OGaKMzHTU=) 2025-09-04 00:21:52.304995 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+LDurVph5fHQCEyKRdWh1hjPE94AJ/LsolzxowAWsZP5BYziCoyR8mRtnx430zSyekaxSZh/WbKuHbOVWUCI4mnj0/SbeuIeevCOyKwDO61k36rFLbUQdB/KCE+vjujpAfjFZokCxWcY9+mkF3XdZS2pKMruh2XkhM0mZHlaqiL1NSNEY7QmVVW8awY1px1aQ0eEcwb4BV5zqyWhvCNTvkefD0q9iN8QGq8tZpSttBUotXprnN8R7TYSv3P0EyysYmHDvoXdbmq3f7hwHsIsYdrGBFfMCnfN/sPRBk2wnoPEyKrCoTsxFefp7TbtDtzBENbptKrx/gD7ekKOEGa1tiwJ6x95XmiWSyFg9vU9E4bUB4CPqS4mzyOttiAekOUcPj68RrDXF+LmXFAtOWB9l182grMgI52MFkIJIu0F0auob7ALAPDyvIMTiBoPEZe1UIeFzt08TXnFVRrJR2VLSoowHhYCIgECFr8hohJ4AckzayWE69wkFClF0zqE0j7c=) 2025-09-04 00:21:52.305008 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINQ299V0hVJYS2hujuoPFy0SHQ456wW63Ts9lq8OaaZu) 2025-09-04 00:21:52.305021 | orchestrator | 2025-09-04 00:21:52.305034 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-04 00:21:52.305047 | orchestrator | Thursday 04 September 2025 00:21:48 +0000 (0:00:01.067) 0:00:09.567 **** 2025-09-04 00:21:52.305126 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCo3WqMkeFU8dDBVcR9XvR1NMP7MTd9pa7HmT7Cvfp8q5/C3o2/qa3xRUk8oRVM4wSKmP+nkT7M05X9MbcPGveKDF3I4YdS1aJUB1wOpc3ZohOZf2Dtu7d11D/FE3/JZDSvOPh0IrhaXCMMxrnlNfCwZGqUgxqY71ci+LMdV19vbpSUinig6Wtw/bziW3LIG2I82X9BU40KP68WOE5RK6ItbZ7v123LHEgATog7zR/1Vc32s2d2SpgLuBnPPNA6lkoDJ8F/29oa67xmAkdTU9VyKt7VaUTSHo4lTaUXF4h69cyo6zJnaag82z88qK736gLI5gpqabEQSyxEL07uVsdNZawyCuW9Fft/Tv2cHBUz8GiDHiGAmupxzXZVfsGyo78mir6p4b6irsVkjWirWzYUvQfLjen/c7qdK8trGN6r9eRn98ztiiVTd/fWmn73t7imcNLVV+UKxT9DTJcqTyJbY4+KGUkGZ48WIanbpxBQMhO6nGYGFnYK5P0hEDCMGBs=) 2025-09-04 00:21:52.305141 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIOXzJGCWYScmgkT85HiYoEMShDqIHJb5J/1qZhWxSLJ2e4dMVl3VT0E8SNd5yCrJAJJY+sp4NKquB2vRARlKF0=) 2025-09-04 00:21:52.305154 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBfC/Bu9ldYEB3iZMVZ+CoIT2lw0sjNVjXtsTH3gllqU) 2025-09-04 
00:21:52.305164 | orchestrator | 2025-09-04 00:21:52.305176 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-04 00:21:52.305187 | orchestrator | Thursday 04 September 2025 00:21:50 +0000 (0:00:01.124) 0:00:10.692 **** 2025-09-04 00:21:52.305198 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCgAAbZoHv13U2ecQaMqafmoUba8eVnkyXUwlKgAV7lDy4eeBqN9qKPYbOkdrKz9YDjIOELo2OuSy0MCDBhhHDf6DtwO3g4r3SusaxVKNOkz5J7Z4hW3nNiSFw9S6YTGZo5RRhRyfgQNH2AiJvc2HUOXAmyJ9w3u6pnI2v7KS4GfBj+sfRHbEdpWNf+LrYtH89XBgZ8WJyeE/b5X/3o95NWAQKG4UXsVm6oraSPtZ37iURu9rt7j8oS3kWqk6xIhyowWHd7lIhEvnR/YjrfX1T7Md9UrR8ei9/e5R1MsDpK8fAx++vMGg5bk3Erofd6DcCzKiwZkhKqGDUAHfiszlTcTHpgMy7ymzvgzwocwcp5wmPqnvt8+DlkrpN18fpQNmqDnoW57R+L/uDveFj3Ys/OHnQnP/P6TUOYTEQW8ZEkZWbrpcBJYvPIzaTKxEV8mHDA1YOEL2wpBf3R0fsZ6YH+Hp1GB5wxLhfduo93ksYlQhKsq4eyUw8smZGoxLTDyEk=) 2025-09-04 00:21:52.305209 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMuLR+VcratkQOeD0TP0mZ8DcFUcLqU7NSzaNTCskcNTwp7fXp5v8WS2T5oGD30usnHmK3XCQ8u8Dt8o4dxP6Yg=) 2025-09-04 00:21:52.305221 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPdZZDy2iobP3HYqXrlubmowiZFG7N9RsytJHwM2m3RU) 2025-09-04 00:21:52.305239 | orchestrator | 2025-09-04 00:21:52.305249 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-04 00:21:52.305260 | orchestrator | Thursday 04 September 2025 00:21:51 +0000 (0:00:01.069) 0:00:11.762 **** 2025-09-04 00:21:52.305278 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNSyQfduOI/VTemzt6KYjaXvgXcVmXhVM4XXgm6ytfK4EWOGZJShHM3nwq+azQ9NIgWGJIsrm5b0OMfZ1l3S8Os=) 2025-09-04 00:22:03.480637 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILlEpK2hUCZ/LPfbyN9EGKu5o4EjFIRtIgotlGG+2T3s) 2025-09-04 00:22:03.480763 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQjZH2JiTZdEIFDtsWakaYOy4tA9gEq1Vl2Jx5bDHWWxUGsqhiqTBeFZ7a0jUFbHEwY6/ORHBP+fAm2QtwKfFPaWN84xgVukJRWiSNbppLlcJHffB5e/NLapvYar5Bk8Z5ZcHfdd5n+3MY33Te7futxIJFIcZxAdJOl8+J+IHsCE4wkuVTQuUl+7tcslwEGyz0wq/cBFe9i6ez2wAsme4gQiDlRBA4HMGnfhTYfskQFywwdgJPuCoFkeaVfjM/4uR0xr1Kerv1s4y+mhVkrhUvj63tdikBuWWBnQHMmJ/ytIlDu1kvOE55o+1eu3kNYKJECbaQIRN84bdbYezk1QMMosgkZDt4O2VQtBnDEzTZRyvEsarO5yItqgDjSZuumkVOB14s9lix2OYr1rlJQ9TjcoGNY+vbogBY+SGlJYPSGE0kAqjk+egWCVlEeT+eNNDb2gmeB+gdrxsg3ft79ML6d2if7kr4+KZ8bfbIDezKbVJ625wTUZIXoVVcNc8oUjk=) 2025-09-04 00:22:03.480784 | orchestrator | 2025-09-04 00:22:03.480797 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-04 00:22:03.480810 | orchestrator | Thursday 04 September 2025 00:21:52 +0000 (0:00:01.140) 0:00:12.903 **** 2025-09-04 00:22:03.480821 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJSeWd0V8ACmKbUYjVgNQg1lUSTiMRTI6yeusucY8BkY6C8bqbnJratO4PGeJMr7xABlaO+iuh250XoP9RFAfME=) 2025-09-04 00:22:03.480835 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIELgoWT7hCKJqpOn0GurjKel9UgD0/dOk8rTxhYGuTZ3) 2025-09-04 00:22:03.480846 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDNtYp/wt0nVYqZ8Jq1vHu9tDLNASogsNWfhfbS4s7+54D6VIEaA8dyKv9pegcZqoAhfME9xDcS+IeY3nm2lySMyD8vdvHqinnuJtXUmCyqYZyuDV4sQerXLaJpj9J2cD3/6T0SeXgg6Lgf62XADb9jaH+hsINKw/Xai2ZkGRU0AExRzApU66LLIt7EeulW4zAnu8lxsmMa4ictyBLDWtsnpW6+3PgRo+ySwH87zbccdig4yCWvx0H/NDC8SYyqUnbNJIbTtt2Luf4O2pkzlq6oQn4sZzq30o+o7NASb/OmpmDY+yPzdiJUTISSaHksUrvV+kuOP3y5oH4SCb8/NPQUyZQhqYDq4IXCm95+8CCnjiiDhDVmHpM/zVTlKLNkhqoqspz5DMoHaHUW9Js4ui47HAu4XRYkoFZE40YR1uJEISCEvb6NxDiI5Wt0lie06yeD1bUrw+QE0ojY9so8W4pdT7YywjN7/IrEexIs4U28jToYveyIFoF7NSCzUia0ZLM=) 2025-09-04 00:22:03.480858 | orchestrator | 2025-09-04 00:22:03.480869 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-09-04 00:22:03.480881 | orchestrator | Thursday 04 September 2025 00:21:53 +0000 (0:00:01.149) 0:00:14.052 **** 2025-09-04 00:22:03.480893 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-09-04 00:22:03.480904 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-09-04 00:22:03.480915 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-09-04 00:22:03.480926 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-09-04 00:22:03.480937 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-09-04 00:22:03.480948 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-09-04 00:22:03.480958 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-09-04 00:22:03.480969 | orchestrator | 2025-09-04 00:22:03.480980 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-09-04 00:22:03.481003 | orchestrator | Thursday 04 September 2025 00:21:58 +0000 (0:00:05.450) 0:00:19.502 **** 2025-09-04 00:22:03.481015 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-09-04 00:22:03.481028 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-09-04 00:22:03.481062 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-09-04 00:22:03.481073 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-09-04 00:22:03.481084 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-09-04 00:22:03.481095 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-09-04 00:22:03.481105 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-09-04 00:22:03.481116 | orchestrator | 2025-09-04 00:22:03.481146 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-04 
00:22:03.481160 | orchestrator | Thursday 04 September 2025 00:21:59 +0000 (0:00:00.168) 0:00:19.671 **** 2025-09-04 00:22:03.481175 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGnxLjuzM+noqHyh56YOTK+fTj5DfuGIFcxXObWwXv3x) 2025-09-04 00:22:03.481190 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDN9lDmVa/FzGaUkHyUgUyJppBw0vEX5XIvXNQzm9aiymfDn03ia8dJxy4j8Z+9PxO+kT8SOSqiXrWcaUs8ZavrKjMf0FFn0iTJQCKuQ5MK4RPWCBu07TxTzL9m3BaPXSHP+XsY/B7VZKwa+hfVQG48xbWokzC6ovU6zWDkoCgVgb0lvllDmvXi3lukS4MpMv+BjukGOwI3MT0FlBP2+8kztJCFPUOzk8+1YdxADtCYipLJNxkunyntZ2l5Pdxlt0hbjtKPbj4LCj4x9T1nawrBNqu6LagGfLd4IFUj5fpX30XJdF3tWCKrGcWHnfxxl8odlq/+R5lQT3g24V0gyQB7P0sr4+fouCNml2eOpURvzQ9j4IB/lj1BEbWHZYNvS8hgGS2qHpyukoYz7RztNhlRon9tx+pMfH+xs1VueIdpjD5kWFSfBlJ7IQtZz93MJre8V0ye8N/QbishXjxDFuAOO3ecME8U9xOD+pokGBqPki8QUmHAAci3bVm1eUvCRc8=) 2025-09-04 00:22:03.481205 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNJjccbe5Ce1JYNgTrr0Y9ZdGClBIGWVoTihzkGnmL7cMam4IZ33FS4izhHMG/8WTP9HR4l31EkUH7q2sYFPSQw=) 2025-09-04 00:22:03.481218 | orchestrator | 2025-09-04 00:22:03.481230 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-04 00:22:03.481244 | orchestrator | Thursday 04 September 2025 00:22:00 +0000 (0:00:01.122) 0:00:20.794 **** 2025-09-04 00:22:03.481256 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN2d4HJxeMimjTlogeA2GdCyBakzIYGdBKb4ppiYxqDx) 2025-09-04 00:22:03.481270 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCGvs/zaT3uGJQy2YKxqsUi653jiqyamc1vh2sQ3hyJ8+stBms5JhbQ81a/KDq5jae43iR5npx8JM7ONfeVP7RP4UwhoLQzbs4jO90sIfCzzX9bMJCCjaIRlHTZSk0FgVxQfQbXXrVvFTj/hapQ3sq5YweWTPbAJ4i7uJ7uJFZaq2rm0Jst6+oRQztr8x2u/K9hnj7CWQZV0TxGrJ1mVVI4dPNcK7KiiLuv9lHZE7yMDOyZKnIoUEl3UOGXQGjQdWhqbr06k7HObSZMwGLR/W3IqC0rgHOhGbh7qBSeFG5Jn/USgLCqcTf4QVKZ6iUjyenTv2k40kFO3f2KadsmGc1rUUKlFSns97IbXhY834Vq/29+WFc06ftX12+ec3deGTAgKIfB0ZuodsWGTrIvAD08fksV4OI9EJgNqsTO2y6wxLqgKQxiwgfF0XbvrNw3byVks1YEYogVOwv6/QUxg4RvNSgWMhbIzOB0fsOVlRvloZ8S9I/kAUwXZNJN/AZ8wgE=) 2025-09-04 00:22:03.481284 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPpSqDDRmSFpwi4AkP3uzk18aZkMNmBBigrleTkq61o97RKfPEBLiJMScKj6GlbBw6wDVIIs5v/Hrwq/nFWuHO8=) 2025-09-04 00:22:03.481297 | orchestrator | 2025-09-04 00:22:03.481333 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-04 00:22:03.481347 | orchestrator | Thursday 04 September 2025 00:22:01 +0000 (0:00:01.112) 0:00:21.906 **** 2025-09-04 00:22:03.481368 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDQ17BBFzjcYA7YDF1c0o0IA1AG2XWAUZwGNRBeKK35A2JeIS//dDRyozUYYDP5jEaC4g6mVhmH062OGaKMzHTU=) 2025-09-04 00:22:03.481382 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC+LDurVph5fHQCEyKRdWh1hjPE94AJ/LsolzxowAWsZP5BYziCoyR8mRtnx430zSyekaxSZh/WbKuHbOVWUCI4mnj0/SbeuIeevCOyKwDO61k36rFLbUQdB/KCE+vjujpAfjFZokCxWcY9+mkF3XdZS2pKMruh2XkhM0mZHlaqiL1NSNEY7QmVVW8awY1px1aQ0eEcwb4BV5zqyWhvCNTvkefD0q9iN8QGq8tZpSttBUotXprnN8R7TYSv3P0EyysYmHDvoXdbmq3f7hwHsIsYdrGBFfMCnfN/sPRBk2wnoPEyKrCoTsxFefp7TbtDtzBENbptKrx/gD7ekKOEGa1tiwJ6x95XmiWSyFg9vU9E4bUB4CPqS4mzyOttiAekOUcPj68RrDXF+LmXFAtOWB9l182grMgI52MFkIJIu0F0auob7ALAPDyvIMTiBoPEZe1UIeFzt08TXnFVRrJR2VLSoowHhYCIgECFr8hohJ4AckzayWE69wkFClF0zqE0j7c=) 2025-09-04 00:22:03.481396 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINQ299V0hVJYS2hujuoPFy0SHQ456wW63Ts9lq8OaaZu) 2025-09-04 00:22:03.481408 | orchestrator | 2025-09-04 00:22:03.481421 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-04 00:22:03.481435 | orchestrator | Thursday 04 September 2025 00:22:02 +0000 (0:00:01.109) 0:00:23.015 **** 2025-09-04 00:22:03.481457 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIOXzJGCWYScmgkT85HiYoEMShDqIHJb5J/1qZhWxSLJ2e4dMVl3VT0E8SNd5yCrJAJJY+sp4NKquB2vRARlKF0=) 2025-09-04 00:22:03.481471 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBfC/Bu9ldYEB3iZMVZ+CoIT2lw0sjNVjXtsTH3gllqU) 2025-09-04 00:22:03.481504 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCo3WqMkeFU8dDBVcR9XvR1NMP7MTd9pa7HmT7Cvfp8q5/C3o2/qa3xRUk8oRVM4wSKmP+nkT7M05X9MbcPGveKDF3I4YdS1aJUB1wOpc3ZohOZf2Dtu7d11D/FE3/JZDSvOPh0IrhaXCMMxrnlNfCwZGqUgxqY71ci+LMdV19vbpSUinig6Wtw/bziW3LIG2I82X9BU40KP68WOE5RK6ItbZ7v123LHEgATog7zR/1Vc32s2d2SpgLuBnPPNA6lkoDJ8F/29oa67xmAkdTU9VyKt7VaUTSHo4lTaUXF4h69cyo6zJnaag82z88qK736gLI5gpqabEQSyxEL07uVsdNZawyCuW9Fft/Tv2cHBUz8GiDHiGAmupxzXZVfsGyo78mir6p4b6irsVkjWirWzYUvQfLjen/c7qdK8trGN6r9eRn98ztiiVTd/fWmn73t7imcNLVV+UKxT9DTJcqTyJbY4+KGUkGZ48WIanbpxBQMhO6nGYGFnYK5P0hEDCMGBs=) 2025-09-04 00:22:07.823402 | orchestrator | 2025-09-04 00:22:07.823511 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-04 00:22:07.823528 | orchestrator | Thursday 04 September 2025 00:22:03 +0000 (0:00:01.062) 0:00:24.078 **** 2025-09-04 00:22:07.823543 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCgAAbZoHv13U2ecQaMqafmoUba8eVnkyXUwlKgAV7lDy4eeBqN9qKPYbOkdrKz9YDjIOELo2OuSy0MCDBhhHDf6DtwO3g4r3SusaxVKNOkz5J7Z4hW3nNiSFw9S6YTGZo5RRhRyfgQNH2AiJvc2HUOXAmyJ9w3u6pnI2v7KS4GfBj+sfRHbEdpWNf+LrYtH89XBgZ8WJyeE/b5X/3o95NWAQKG4UXsVm6oraSPtZ37iURu9rt7j8oS3kWqk6xIhyowWHd7lIhEvnR/YjrfX1T7Md9UrR8ei9/e5R1MsDpK8fAx++vMGg5bk3Erofd6DcCzKiwZkhKqGDUAHfiszlTcTHpgMy7ymzvgzwocwcp5wmPqnvt8+DlkrpN18fpQNmqDnoW57R+L/uDveFj3Ys/OHnQnP/P6TUOYTEQW8ZEkZWbrpcBJYvPIzaTKxEV8mHDA1YOEL2wpBf3R0fsZ6YH+Hp1GB5wxLhfduo93ksYlQhKsq4eyUw8smZGoxLTDyEk=) 2025-09-04 00:22:07.823558 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPdZZDy2iobP3HYqXrlubmowiZFG7N9RsytJHwM2m3RU) 2025-09-04 00:22:07.823573 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMuLR+VcratkQOeD0TP0mZ8DcFUcLqU7NSzaNTCskcNTwp7fXp5v8WS2T5oGD30usnHmK3XCQ8u8Dt8o4dxP6Yg=) 2025-09-04 00:22:07.823587 | orchestrator | 2025-09-04 00:22:07.823598 | orchestrator | TASK 
[osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-04 00:22:07.823609 | orchestrator | Thursday 04 September 2025 00:22:04 +0000 (0:00:01.117) 0:00:25.195 **** 2025-09-04 00:22:07.823620 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNSyQfduOI/VTemzt6KYjaXvgXcVmXhVM4XXgm6ytfK4EWOGZJShHM3nwq+azQ9NIgWGJIsrm5b0OMfZ1l3S8Os=) 2025-09-04 00:22:07.823657 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQjZH2JiTZdEIFDtsWakaYOy4tA9gEq1Vl2Jx5bDHWWxUGsqhiqTBeFZ7a0jUFbHEwY6/ORHBP+fAm2QtwKfFPaWN84xgVukJRWiSNbppLlcJHffB5e/NLapvYar5Bk8Z5ZcHfdd5n+3MY33Te7futxIJFIcZxAdJOl8+J+IHsCE4wkuVTQuUl+7tcslwEGyz0wq/cBFe9i6ez2wAsme4gQiDlRBA4HMGnfhTYfskQFywwdgJPuCoFkeaVfjM/4uR0xr1Kerv1s4y+mhVkrhUvj63tdikBuWWBnQHMmJ/ytIlDu1kvOE55o+1eu3kNYKJECbaQIRN84bdbYezk1QMMosgkZDt4O2VQtBnDEzTZRyvEsarO5yItqgDjSZuumkVOB14s9lix2OYr1rlJQ9TjcoGNY+vbogBY+SGlJYPSGE0kAqjk+egWCVlEeT+eNNDb2gmeB+gdrxsg3ft79ML6d2if7kr4+KZ8bfbIDezKbVJ625wTUZIXoVVcNc8oUjk=) 2025-09-04 00:22:07.823669 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILlEpK2hUCZ/LPfbyN9EGKu5o4EjFIRtIgotlGG+2T3s) 2025-09-04 00:22:07.823680 | orchestrator | 2025-09-04 00:22:07.823691 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-04 00:22:07.823701 | orchestrator | Thursday 04 September 2025 00:22:05 +0000 (0:00:01.053) 0:00:26.249 **** 2025-09-04 00:22:07.823712 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJSeWd0V8ACmKbUYjVgNQg1lUSTiMRTI6yeusucY8BkY6C8bqbnJratO4PGeJMr7xABlaO+iuh250XoP9RFAfME=) 2025-09-04 00:22:07.823723 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDNtYp/wt0nVYqZ8Jq1vHu9tDLNASogsNWfhfbS4s7+54D6VIEaA8dyKv9pegcZqoAhfME9xDcS+IeY3nm2lySMyD8vdvHqinnuJtXUmCyqYZyuDV4sQerXLaJpj9J2cD3/6T0SeXgg6Lgf62XADb9jaH+hsINKw/Xai2ZkGRU0AExRzApU66LLIt7EeulW4zAnu8lxsmMa4ictyBLDWtsnpW6+3PgRo+ySwH87zbccdig4yCWvx0H/NDC8SYyqUnbNJIbTtt2Luf4O2pkzlq6oQn4sZzq30o+o7NASb/OmpmDY+yPzdiJUTISSaHksUrvV+kuOP3y5oH4SCb8/NPQUyZQhqYDq4IXCm95+8CCnjiiDhDVmHpM/zVTlKLNkhqoqspz5DMoHaHUW9Js4ui47HAu4XRYkoFZE40YR1uJEISCEvb6NxDiI5Wt0lie06yeD1bUrw+QE0ojY9so8W4pdT7YywjN7/IrEexIs4U28jToYveyIFoF7NSCzUia0ZLM=) 2025-09-04 00:22:07.823735 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIELgoWT7hCKJqpOn0GurjKel9UgD0/dOk8rTxhYGuTZ3) 2025-09-04 00:22:07.823746 | orchestrator | 2025-09-04 00:22:07.823757 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-09-04 00:22:07.823768 | orchestrator | Thursday 04 September 2025 00:22:06 +0000 (0:00:01.075) 0:00:27.325 **** 2025-09-04 00:22:07.823780 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-09-04 00:22:07.823791 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-04 00:22:07.823802 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-09-04 00:22:07.823812 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-09-04 00:22:07.823823 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-09-04 00:22:07.823833 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-09-04 00:22:07.823845 | 
orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-09-04 00:22:07.823856 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:22:07.823867 | orchestrator | 2025-09-04 00:22:07.823896 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-09-04 00:22:07.823910 | orchestrator | Thursday 04 September 2025 00:22:06 +0000 (0:00:00.173) 0:00:27.498 **** 2025-09-04 00:22:07.823923 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:22:07.823936 | orchestrator | 2025-09-04 00:22:07.823949 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-09-04 00:22:07.823963 | orchestrator | Thursday 04 September 2025 00:22:06 +0000 (0:00:00.071) 0:00:27.569 **** 2025-09-04 00:22:07.823975 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:22:07.823988 | orchestrator | 2025-09-04 00:22:07.824000 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-09-04 00:22:07.824013 | orchestrator | Thursday 04 September 2025 00:22:07 +0000 (0:00:00.068) 0:00:27.638 **** 2025-09-04 00:22:07.824033 | orchestrator | changed: [testbed-manager] 2025-09-04 00:22:07.824045 | orchestrator | 2025-09-04 00:22:07.824058 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:22:07.824072 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-04 00:22:07.824086 | orchestrator | 2025-09-04 00:22:07.824099 | orchestrator | 2025-09-04 00:22:07.824130 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 00:22:07.824144 | orchestrator | Thursday 04 September 2025 00:22:07 +0000 (0:00:00.520) 0:00:28.158 **** 2025-09-04 00:22:07.824156 | orchestrator | =============================================================================== 2025-09-04 00:22:07.824169 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.88s 2025-09-04 00:22:07.824182 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.45s 2025-09-04 00:22:07.824196 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2025-09-04 00:22:07.824209 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2025-09-04 00:22:07.824222 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2025-09-04 00:22:07.824236 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-09-04 00:22:07.824249 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-09-04 00:22:07.824260 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-09-04 00:22:07.824271 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-09-04 00:22:07.824281 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-09-04 00:22:07.824292 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-09-04 00:22:07.824303 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-09-04 00:22:07.824334 | orchestrator | osism.commons.known_hosts : Write scanned 
known_hosts entries ----------- 1.07s 2025-09-04 00:22:07.824346 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-09-04 00:22:07.824356 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-09-04 00:22:07.824367 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-09-04 00:22:07.824378 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.52s 2025-09-04 00:22:07.824389 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s 2025-09-04 00:22:07.824399 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2025-09-04 00:22:07.824411 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2025-09-04 00:22:08.128365 | orchestrator | + osism apply squid 2025-09-04 00:22:20.328116 | orchestrator | 2025-09-04 00:22:20 | INFO  | Task ad7818ce-76ed-4f32-8c09-696c46b881db (squid) was prepared for execution. 2025-09-04 00:22:20.328257 | orchestrator | 2025-09-04 00:22:20 | INFO  | It takes a moment until task ad7818ce-76ed-4f32-8c09-696c46b881db (squid) has been started and output is visible here. 2025-09-04 00:24:17.402372 | orchestrator | 2025-09-04 00:24:17.402493 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-09-04 00:24:17.402511 | orchestrator | 2025-09-04 00:24:17.402524 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-09-04 00:24:17.402536 | orchestrator | Thursday 04 September 2025 00:22:24 +0000 (0:00:00.179) 0:00:00.179 **** 2025-09-04 00:24:17.402547 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-09-04 00:24:17.402559 | orchestrator | 2025-09-04 00:24:17.402597 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-09-04 00:24:17.402608 | orchestrator | Thursday 04 September 2025 00:22:24 +0000 (0:00:00.098) 0:00:00.278 **** 2025-09-04 00:24:17.402619 | orchestrator | ok: [testbed-manager] 2025-09-04 00:24:17.402631 | orchestrator | 2025-09-04 00:24:17.402642 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-09-04 00:24:17.402653 | orchestrator | Thursday 04 September 2025 00:22:26 +0000 (0:00:01.714) 0:00:01.993 **** 2025-09-04 00:24:17.402664 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-09-04 00:24:17.402675 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-09-04 00:24:17.402686 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-09-04 00:24:17.402697 | orchestrator | 2025-09-04 00:24:17.402708 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-09-04 00:24:17.402719 | orchestrator | Thursday 04 September 2025 00:22:27 +0000 (0:00:01.271) 0:00:03.264 **** 2025-09-04 00:24:17.402729 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-09-04 00:24:17.402740 | orchestrator | 2025-09-04 00:24:17.402751 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-09-04 00:24:17.402762 | orchestrator | Thursday 04 
September 2025 00:22:28 +0000 (0:00:01.162) 0:00:04.426 **** 2025-09-04 00:24:17.402773 | orchestrator | ok: [testbed-manager] 2025-09-04 00:24:17.402783 | orchestrator | 2025-09-04 00:24:17.402794 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-09-04 00:24:17.402805 | orchestrator | Thursday 04 September 2025 00:22:29 +0000 (0:00:00.394) 0:00:04.821 **** 2025-09-04 00:24:17.402816 | orchestrator | changed: [testbed-manager] 2025-09-04 00:24:17.402826 | orchestrator | 2025-09-04 00:24:17.402837 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-09-04 00:24:17.402848 | orchestrator | Thursday 04 September 2025 00:22:30 +0000 (0:00:00.950) 0:00:05.772 **** 2025-09-04 00:24:17.402861 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 2025-09-04 00:24:17.402874 | orchestrator | ok: [testbed-manager] 2025-09-04 00:24:17.402887 | orchestrator | 2025-09-04 00:24:17.402899 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-09-04 00:24:17.402911 | orchestrator | Thursday 04 September 2025 00:23:04 +0000 (0:00:33.837) 0:00:39.609 **** 2025-09-04 00:24:17.402923 | orchestrator | changed: [testbed-manager] 2025-09-04 00:24:17.402936 | orchestrator | 2025-09-04 00:24:17.402949 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-09-04 00:24:17.402962 | orchestrator | Thursday 04 September 2025 00:23:16 +0000 (0:00:12.124) 0:00:51.734 **** 2025-09-04 00:24:17.402975 | orchestrator | Pausing for 60 seconds 2025-09-04 00:24:17.402987 | orchestrator | changed: [testbed-manager] 2025-09-04 00:24:17.402999 | orchestrator | 2025-09-04 00:24:17.403012 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-09-04 00:24:17.403025 | orchestrator | Thursday 04 September 2025 00:24:16 +0000 (0:01:00.074) 0:01:51.808 **** 2025-09-04 00:24:17.403037 | orchestrator | ok: [testbed-manager] 2025-09-04 00:24:17.403049 | orchestrator | 2025-09-04 00:24:17.403062 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-09-04 00:24:17.403074 | orchestrator | Thursday 04 September 2025 00:24:16 +0000 (0:00:00.064) 0:01:51.873 **** 2025-09-04 00:24:17.403087 | orchestrator | changed: [testbed-manager] 2025-09-04 00:24:17.403100 | orchestrator | 2025-09-04 00:24:17.403112 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:24:17.403124 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:24:17.403136 | orchestrator | 2025-09-04 00:24:17.403148 | orchestrator | 2025-09-04 00:24:17.403161 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 00:24:17.403173 | orchestrator | Thursday 04 September 2025 00:24:17 +0000 (0:00:00.710) 0:01:52.583 **** 2025-09-04 00:24:17.403194 | orchestrator | =============================================================================== 2025-09-04 00:24:17.403207 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s 2025-09-04 00:24:17.403218 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 33.84s 2025-09-04 00:24:17.403228 | orchestrator | osism.services.squid : Restart squid service 
--------------------------- 12.12s 2025-09-04 00:24:17.403239 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.71s 2025-09-04 00:24:17.403250 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.27s 2025-09-04 00:24:17.403260 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.16s 2025-09-04 00:24:17.403271 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.95s 2025-09-04 00:24:17.403299 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.71s 2025-09-04 00:24:17.403311 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.39s 2025-09-04 00:24:17.403322 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s 2025-09-04 00:24:17.403332 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2025-09-04 00:24:17.726929 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-04 00:24:17.727590 | orchestrator | ++ semver latest 9.0.0 2025-09-04 00:24:17.794239 | orchestrator | + [[ -1 -lt 0 ]] 2025-09-04 00:24:17.794361 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-04 00:24:17.796274 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-09-04 00:24:29.885341 | orchestrator | 2025-09-04 00:24:29 | INFO  | Task 108c63d2-921c-4192-aabb-acd84f47dadb (operator) was prepared for execution. 2025-09-04 00:24:29.885448 | orchestrator | 2025-09-04 00:24:29 | INFO  | It takes a moment until task 108c63d2-921c-4192-aabb-acd84f47dadb (operator) has been started and output is visible here. 2025-09-04 00:24:46.316046 | orchestrator | 2025-09-04 00:24:46.316168 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-09-04 00:24:46.316186 | orchestrator | 2025-09-04 00:24:46.316198 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-04 00:24:46.316210 | orchestrator | Thursday 04 September 2025 00:24:33 +0000 (0:00:00.150) 0:00:00.150 **** 2025-09-04 00:24:46.316221 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:24:46.316234 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:24:46.316244 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:24:46.316255 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:24:46.316265 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:24:46.316276 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:24:46.316313 | orchestrator | 2025-09-04 00:24:46.316324 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-09-04 00:24:46.316335 | orchestrator | Thursday 04 September 2025 00:24:37 +0000 (0:00:03.663) 0:00:03.813 **** 2025-09-04 00:24:46.316346 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:24:46.316358 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:24:46.316369 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:24:46.316380 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:24:46.316391 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:24:46.316401 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:24:46.316412 | orchestrator | 2025-09-04 00:24:46.316423 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-09-04 00:24:46.316434 | orchestrator | 2025-09-04 00:24:46.316445 | orchestrator | TASK 
[osism.commons.operator : Gather variables for each operating system] ***** 2025-09-04 00:24:46.316457 | orchestrator | Thursday 04 September 2025 00:24:38 +0000 (0:00:00.931) 0:00:04.745 **** 2025-09-04 00:24:46.316467 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:24:46.316478 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:24:46.316489 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:24:46.316500 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:24:46.316510 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:24:46.316546 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:24:46.316557 | orchestrator | 2025-09-04 00:24:46.316568 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-04 00:24:46.316581 | orchestrator | Thursday 04 September 2025 00:24:38 +0000 (0:00:00.158) 0:00:04.904 **** 2025-09-04 00:24:46.316594 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:24:46.316606 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:24:46.316618 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:24:46.316631 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:24:46.316643 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:24:46.316656 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:24:46.316668 | orchestrator | 2025-09-04 00:24:46.316682 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-04 00:24:46.316694 | orchestrator | Thursday 04 September 2025 00:24:38 +0000 (0:00:00.177) 0:00:05.081 **** 2025-09-04 00:24:46.316706 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:24:46.316719 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:24:46.316732 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:24:46.316744 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:24:46.316755 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:24:46.316766 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:24:46.316777 | orchestrator | 2025-09-04 00:24:46.316788 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-04 00:24:46.316799 | orchestrator | Thursday 04 September 2025 00:24:39 +0000 (0:00:00.667) 0:00:05.748 **** 2025-09-04 00:24:46.316810 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:24:46.316820 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:24:46.316831 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:24:46.316842 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:24:46.316853 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:24:46.316864 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:24:46.316874 | orchestrator | 2025-09-04 00:24:46.316885 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-04 00:24:46.316896 | orchestrator | Thursday 04 September 2025 00:24:40 +0000 (0:00:00.763) 0:00:06.512 **** 2025-09-04 00:24:46.316908 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-09-04 00:24:46.316919 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-09-04 00:24:46.316929 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-09-04 00:24:46.316940 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-09-04 00:24:46.316951 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-09-04 00:24:46.316962 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-09-04 00:24:46.316972 | orchestrator | changed: [testbed-node-1] => (item=sudo) 
2025-09-04 00:24:46.316983 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-09-04 00:24:46.316994 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-09-04 00:24:46.317005 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-09-04 00:24:46.317016 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-09-04 00:24:46.317026 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-09-04 00:24:46.317037 | orchestrator | 2025-09-04 00:24:46.317048 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-09-04 00:24:46.317059 | orchestrator | Thursday 04 September 2025 00:24:41 +0000 (0:00:01.196) 0:00:07.709 **** 2025-09-04 00:24:46.317070 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:24:46.317081 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:24:46.317092 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:24:46.317102 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:24:46.317113 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:24:46.317124 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:24:46.317135 | orchestrator | 2025-09-04 00:24:46.317146 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-09-04 00:24:46.317158 | orchestrator | Thursday 04 September 2025 00:24:42 +0000 (0:00:01.267) 0:00:08.977 **** 2025-09-04 00:24:46.317177 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-09-04 00:24:46.317188 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To 2025-09-04 00:24:46.317199 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-09-04 00:24:46.317210 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-09-04 00:24:46.317238 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-09-04 00:24:46.317249 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-09-04 00:24:46.317260 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-09-04 00:24:46.317271 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-09-04 00:24:46.317301 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-09-04 00:24:46.317313 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-09-04 00:24:46.317323 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-09-04 00:24:46.317334 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-09-04 00:24:46.317345 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-09-04 00:24:46.317356 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-09-04 00:24:46.317367 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-09-04 00:24:46.317378 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-09-04 00:24:46.317389 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-09-04 00:24:46.317399 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-09-04 00:24:46.317410 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-09-04 00:24:46.317421 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-09-04 00:24:46.317432 | 
orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-09-04 00:24:46.317443 | orchestrator | 2025-09-04 00:24:46.317454 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-09-04 00:24:46.317466 | orchestrator | Thursday 04 September 2025 00:24:44 +0000 (0:00:01.263) 0:00:10.240 **** 2025-09-04 00:24:46.317477 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:24:46.317488 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:24:46.317498 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:24:46.317509 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:24:46.317520 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:24:46.317530 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:24:46.317541 | orchestrator | 2025-09-04 00:24:46.317552 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-09-04 00:24:46.317563 | orchestrator | Thursday 04 September 2025 00:24:44 +0000 (0:00:00.178) 0:00:10.419 **** 2025-09-04 00:24:46.317574 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:24:46.317585 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:24:46.317595 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:24:46.317606 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:24:46.317616 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:24:46.317627 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:24:46.317638 | orchestrator | 2025-09-04 00:24:46.317649 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-09-04 00:24:46.317660 | orchestrator | Thursday 04 September 2025 00:24:44 +0000 (0:00:00.600) 0:00:11.020 **** 2025-09-04 00:24:46.317670 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:24:46.317681 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:24:46.317692 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:24:46.317703 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:24:46.317713 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:24:46.317724 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:24:46.317742 | orchestrator | 2025-09-04 00:24:46.317753 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-09-04 00:24:46.317764 | orchestrator | Thursday 04 September 2025 00:24:44 +0000 (0:00:00.200) 0:00:11.221 **** 2025-09-04 00:24:46.317778 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-09-04 00:24:46.317790 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:24:46.317800 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-04 00:24:46.317811 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-04 00:24:46.317822 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:24:46.317833 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:24:46.317844 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-09-04 00:24:46.317854 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:24:46.317865 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-04 00:24:46.317876 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:24:46.317887 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-04 00:24:46.317898 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:24:46.317908 | orchestrator | 2025-09-04 00:24:46.318108 | orchestrator | TASK [osism.commons.operator 
: Delete ssh authorized keys] ********************* 2025-09-04 00:24:46.318131 | orchestrator | Thursday 04 September 2025 00:24:45 +0000 (0:00:00.737) 0:00:11.958 **** 2025-09-04 00:24:46.318142 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:24:46.318152 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:24:46.318163 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:24:46.318174 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:24:46.318184 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:24:46.318194 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:24:46.318205 | orchestrator | 2025-09-04 00:24:46.318216 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-09-04 00:24:46.318227 | orchestrator | Thursday 04 September 2025 00:24:45 +0000 (0:00:00.183) 0:00:12.142 **** 2025-09-04 00:24:46.318237 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:24:46.318248 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:24:46.318258 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:24:46.318269 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:24:46.318279 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:24:46.318323 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:24:46.318333 | orchestrator | 2025-09-04 00:24:46.318344 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-09-04 00:24:46.318361 | orchestrator | Thursday 04 September 2025 00:24:46 +0000 (0:00:00.191) 0:00:12.333 **** 2025-09-04 00:24:46.318373 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:24:46.318384 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:24:46.318394 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:24:46.318405 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:24:46.318427 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:24:47.373364 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:24:47.373445 | orchestrator | 2025-09-04 00:24:47.373454 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-09-04 00:24:47.373463 | orchestrator | Thursday 04 September 2025 00:24:46 +0000 (0:00:00.193) 0:00:12.527 **** 2025-09-04 00:24:47.373470 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:24:47.373476 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:24:47.373483 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:24:47.373489 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:24:47.373495 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:24:47.373502 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:24:47.373508 | orchestrator | 2025-09-04 00:24:47.373514 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-09-04 00:24:47.373521 | orchestrator | Thursday 04 September 2025 00:24:46 +0000 (0:00:00.625) 0:00:13.153 **** 2025-09-04 00:24:47.373527 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:24:47.373533 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:24:47.373559 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:24:47.373565 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:24:47.373571 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:24:47.373577 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:24:47.373583 | orchestrator | 2025-09-04 00:24:47.373590 | orchestrator | PLAY RECAP 
********************************************************************* 2025-09-04 00:24:47.373597 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-04 00:24:47.373604 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-04 00:24:47.373610 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-04 00:24:47.373617 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-04 00:24:47.373623 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-04 00:24:47.373629 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-04 00:24:47.373635 | orchestrator | 2025-09-04 00:24:47.373641 | orchestrator | 2025-09-04 00:24:47.373648 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 00:24:47.373654 | orchestrator | Thursday 04 September 2025 00:24:47 +0000 (0:00:00.215) 0:00:13.369 **** 2025-09-04 00:24:47.373660 | orchestrator | =============================================================================== 2025-09-04 00:24:47.373666 | orchestrator | Gathering Facts --------------------------------------------------------- 3.66s 2025-09-04 00:24:47.373672 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.27s 2025-09-04 00:24:47.373678 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.26s 2025-09-04 00:24:47.373686 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.20s 2025-09-04 00:24:47.373692 | orchestrator | Do not require tty for all users ---------------------------------------- 0.93s 2025-09-04 00:24:47.373698 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.76s 2025-09-04 00:24:47.373704 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.74s 2025-09-04 00:24:47.373710 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.67s 2025-09-04 00:24:47.373716 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.63s 2025-09-04 00:24:47.373722 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.60s 2025-09-04 00:24:47.373728 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.22s 2025-09-04 00:24:47.373734 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.20s 2025-09-04 00:24:47.373741 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.19s 2025-09-04 00:24:47.373747 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.19s 2025-09-04 00:24:47.373753 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.18s 2025-09-04 00:24:47.373759 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.18s 2025-09-04 00:24:47.373765 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.18s 2025-09-04 00:24:47.373771 | orchestrator | osism.commons.operator : Gather variables for each 
operating system ----- 0.16s 2025-09-04 00:24:47.755770 | orchestrator | + osism apply --environment custom facts 2025-09-04 00:24:49.793422 | orchestrator | 2025-09-04 00:24:49 | INFO  | Trying to run play facts in environment custom 2025-09-04 00:24:59.900568 | orchestrator | 2025-09-04 00:24:59 | INFO  | Task 78d97be0-e502-4239-9ba4-cbd53328e4a6 (facts) was prepared for execution. 2025-09-04 00:24:59.900678 | orchestrator | 2025-09-04 00:24:59 | INFO  | It takes a moment until task 78d97be0-e502-4239-9ba4-cbd53328e4a6 (facts) has been started and output is visible here. 2025-09-04 00:25:43.205396 | orchestrator | 2025-09-04 00:25:43.205505 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-09-04 00:25:43.205519 | orchestrator | 2025-09-04 00:25:43.205529 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-04 00:25:43.205540 | orchestrator | Thursday 04 September 2025 00:25:03 +0000 (0:00:00.090) 0:00:00.090 **** 2025-09-04 00:25:43.205549 | orchestrator | ok: [testbed-manager] 2025-09-04 00:25:43.205559 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:25:43.205569 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:25:43.205579 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:25:43.205589 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:25:43.205600 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:25:43.205611 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:25:43.205622 | orchestrator | 2025-09-04 00:25:43.205633 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-09-04 00:25:43.205644 | orchestrator | Thursday 04 September 2025 00:25:05 +0000 (0:00:01.415) 0:00:01.505 **** 2025-09-04 00:25:43.205655 | orchestrator | ok: [testbed-manager] 2025-09-04 00:25:43.205666 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:25:43.205677 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:25:43.205688 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:25:43.205699 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:25:43.205709 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:25:43.205720 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:25:43.205731 | orchestrator | 2025-09-04 00:25:43.205742 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-09-04 00:25:43.205753 | orchestrator | 2025-09-04 00:25:43.205764 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-04 00:25:43.205775 | orchestrator | Thursday 04 September 2025 00:25:06 +0000 (0:00:01.207) 0:00:02.713 **** 2025-09-04 00:25:43.205786 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:25:43.205797 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:25:43.205807 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:25:43.205818 | orchestrator | 2025-09-04 00:25:43.205829 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-04 00:25:43.205841 | orchestrator | Thursday 04 September 2025 00:25:06 +0000 (0:00:00.123) 0:00:02.837 **** 2025-09-04 00:25:43.205851 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:25:43.205862 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:25:43.205873 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:25:43.205884 | orchestrator | 2025-09-04 00:25:43.205895 | orchestrator | TASK [osism.commons.repository : Set 
repositories to default] ****************** 2025-09-04 00:25:43.205906 | orchestrator | Thursday 04 September 2025 00:25:06 +0000 (0:00:00.198) 0:00:03.035 **** 2025-09-04 00:25:43.205917 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:25:43.205928 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:25:43.205939 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:25:43.205950 | orchestrator | 2025-09-04 00:25:43.205961 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-04 00:25:43.205972 | orchestrator | Thursday 04 September 2025 00:25:06 +0000 (0:00:00.193) 0:00:03.229 **** 2025-09-04 00:25:43.205984 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:25:43.205997 | orchestrator | 2025-09-04 00:25:43.206008 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-04 00:25:43.206079 | orchestrator | Thursday 04 September 2025 00:25:07 +0000 (0:00:00.155) 0:00:03.385 **** 2025-09-04 00:25:43.206124 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:25:43.206136 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:25:43.206147 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:25:43.206157 | orchestrator | 2025-09-04 00:25:43.206168 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-04 00:25:43.206179 | orchestrator | Thursday 04 September 2025 00:25:07 +0000 (0:00:00.450) 0:00:03.835 **** 2025-09-04 00:25:43.206190 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:25:43.206200 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:25:43.206211 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:25:43.206221 | orchestrator | 2025-09-04 00:25:43.206232 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-04 00:25:43.206243 | orchestrator | Thursday 04 September 2025 00:25:07 +0000 (0:00:00.104) 0:00:03.939 **** 2025-09-04 00:25:43.206253 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:25:43.206264 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:25:43.206274 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:25:43.206308 | orchestrator | 2025-09-04 00:25:43.206319 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-04 00:25:43.206330 | orchestrator | Thursday 04 September 2025 00:25:08 +0000 (0:00:01.049) 0:00:04.989 **** 2025-09-04 00:25:43.206359 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:25:43.206370 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:25:43.206381 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:25:43.206391 | orchestrator | 2025-09-04 00:25:43.206403 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-04 00:25:43.206414 | orchestrator | Thursday 04 September 2025 00:25:09 +0000 (0:00:00.478) 0:00:05.467 **** 2025-09-04 00:25:43.206424 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:25:43.206435 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:25:43.206446 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:25:43.206456 | orchestrator | 2025-09-04 00:25:43.206467 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-04 00:25:43.206478 | orchestrator | Thursday 04 September 2025 00:25:10 
+0000 (0:00:01.099) 0:00:06.566 **** 2025-09-04 00:25:43.206502 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:25:43.206524 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:25:43.206535 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:25:43.206545 | orchestrator | 2025-09-04 00:25:43.206556 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-09-04 00:25:43.206567 | orchestrator | Thursday 04 September 2025 00:25:27 +0000 (0:00:16.873) 0:00:23.440 **** 2025-09-04 00:25:43.206577 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:25:43.206593 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:25:43.206604 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:25:43.206615 | orchestrator | 2025-09-04 00:25:43.206626 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-09-04 00:25:43.206657 | orchestrator | Thursday 04 September 2025 00:25:27 +0000 (0:00:00.118) 0:00:23.558 **** 2025-09-04 00:25:43.206668 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:25:43.206679 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:25:43.206690 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:25:43.206700 | orchestrator | 2025-09-04 00:25:43.206711 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-04 00:25:43.206722 | orchestrator | Thursday 04 September 2025 00:25:34 +0000 (0:00:06.933) 0:00:30.492 **** 2025-09-04 00:25:43.206732 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:25:43.206743 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:25:43.206754 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:25:43.206764 | orchestrator | 2025-09-04 00:25:43.206775 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-09-04 00:25:43.206785 | orchestrator | Thursday 04 September 2025 00:25:34 +0000 (0:00:00.458) 0:00:30.951 **** 2025-09-04 00:25:43.206796 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-09-04 00:25:43.206815 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-09-04 00:25:43.206826 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-09-04 00:25:43.206837 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-09-04 00:25:43.206848 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-09-04 00:25:43.206858 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-09-04 00:25:43.206869 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-09-04 00:25:43.206879 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-09-04 00:25:43.206890 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-09-04 00:25:43.206900 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-09-04 00:25:43.206911 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-09-04 00:25:43.206921 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-09-04 00:25:43.206932 | orchestrator | 2025-09-04 00:25:43.206942 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-04 00:25:43.206953 | orchestrator | Thursday 04 September 2025 00:25:38 +0000 (0:00:03.374) 0:00:34.326 **** 
2025-09-04 00:25:43.206963 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:25:43.206974 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:25:43.206985 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:25:43.206996 | orchestrator | 2025-09-04 00:25:43.207007 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-04 00:25:43.207017 | orchestrator | 2025-09-04 00:25:43.207028 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-04 00:25:43.207039 | orchestrator | Thursday 04 September 2025 00:25:39 +0000 (0:00:01.212) 0:00:35.538 **** 2025-09-04 00:25:43.207050 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:25:43.207061 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:25:43.207071 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:25:43.207082 | orchestrator | ok: [testbed-manager] 2025-09-04 00:25:43.207093 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:25:43.207103 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:25:43.207114 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:25:43.207124 | orchestrator | 2025-09-04 00:25:43.207135 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:25:43.207147 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:25:43.207158 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:25:43.207170 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:25:43.207181 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:25:43.207192 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-04 00:25:43.207203 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-04 00:25:43.207214 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-04 00:25:43.207225 | orchestrator | 2025-09-04 00:25:43.207235 | orchestrator | 2025-09-04 00:25:43.207246 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 00:25:43.207257 | orchestrator | Thursday 04 September 2025 00:25:43 +0000 (0:00:03.864) 0:00:39.403 **** 2025-09-04 00:25:43.207275 | orchestrator | =============================================================================== 2025-09-04 00:25:43.207303 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.87s 2025-09-04 00:25:43.207314 | orchestrator | Install required packages (Debian) -------------------------------------- 6.93s 2025-09-04 00:25:43.207325 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.86s 2025-09-04 00:25:43.207335 | orchestrator | Copy fact files --------------------------------------------------------- 3.37s 2025-09-04 00:25:43.207351 | orchestrator | Create custom facts directory ------------------------------------------- 1.42s 2025-09-04 00:25:43.207362 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.21s 2025-09-04 00:25:43.207380 | orchestrator | Copy fact file ---------------------------------------------------------- 1.21s 2025-09-04 00:25:43.425265 | 
orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.10s 2025-09-04 00:25:43.425414 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.05s 2025-09-04 00:25:43.425430 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.48s 2025-09-04 00:25:43.425441 | orchestrator | Create custom facts directory ------------------------------------------- 0.46s 2025-09-04 00:25:43.425452 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.45s 2025-09-04 00:25:43.425463 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.20s 2025-09-04 00:25:43.425474 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.19s 2025-09-04 00:25:43.425485 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.16s 2025-09-04 00:25:43.425497 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s 2025-09-04 00:25:43.425508 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.12s 2025-09-04 00:25:43.425518 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.10s 2025-09-04 00:25:43.697803 | orchestrator | + osism apply bootstrap 2025-09-04 00:25:55.782631 | orchestrator | 2025-09-04 00:25:55 | INFO  | Task 78a337e0-feda-46bc-adc2-995843e7398c (bootstrap) was prepared for execution. 2025-09-04 00:25:55.782746 | orchestrator | 2025-09-04 00:25:55 | INFO  | It takes a moment until task 78a337e0-feda-46bc-adc2-995843e7398c (bootstrap) has been started and output is visible here. 2025-09-04 00:26:11.661548 | orchestrator | 2025-09-04 00:26:11.661667 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-09-04 00:26:11.661685 | orchestrator | 2025-09-04 00:26:11.661697 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-09-04 00:26:11.661709 | orchestrator | Thursday 04 September 2025 00:25:59 +0000 (0:00:00.166) 0:00:00.166 **** 2025-09-04 00:26:11.661720 | orchestrator | ok: [testbed-manager] 2025-09-04 00:26:11.661733 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:26:11.661744 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:26:11.661755 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:26:11.661766 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:26:11.661777 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:26:11.661787 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:26:11.661798 | orchestrator | 2025-09-04 00:26:11.661809 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-04 00:26:11.661820 | orchestrator | 2025-09-04 00:26:11.661831 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-04 00:26:11.661842 | orchestrator | Thursday 04 September 2025 00:26:00 +0000 (0:00:00.235) 0:00:00.401 **** 2025-09-04 00:26:11.661853 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:26:11.661864 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:26:11.661875 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:26:11.661886 | orchestrator | ok: [testbed-manager] 2025-09-04 00:26:11.661896 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:26:11.661907 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:26:11.661945 | 
orchestrator | ok: [testbed-node-3] 2025-09-04 00:26:11.661956 | orchestrator | 2025-09-04 00:26:11.661967 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-09-04 00:26:11.661978 | orchestrator | 2025-09-04 00:26:11.661989 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-04 00:26:11.661999 | orchestrator | Thursday 04 September 2025 00:26:03 +0000 (0:00:03.709) 0:00:04.110 **** 2025-09-04 00:26:11.662011 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-09-04 00:26:11.662078 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-04 00:26:11.662113 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-09-04 00:26:11.662156 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-09-04 00:26:11.662169 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-04 00:26:11.662182 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-09-04 00:26:11.662195 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-04 00:26:11.662207 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-09-04 00:26:11.662220 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-04 00:26:11.662233 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-09-04 00:26:11.662245 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-09-04 00:26:11.662258 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-09-04 00:26:11.662271 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-09-04 00:26:11.662306 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-09-04 00:26:11.662320 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-09-04 00:26:11.662332 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-09-04 00:26:11.662344 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-09-04 00:26:11.662357 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-09-04 00:26:11.662370 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:26:11.662383 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-09-04 00:26:11.662395 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-09-04 00:26:11.662409 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:26:11.662422 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-09-04 00:26:11.662434 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-04 00:26:11.662446 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-09-04 00:26:11.662459 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-04 00:26:11.662487 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-09-04 00:26:11.662499 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-09-04 00:26:11.662510 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-09-04 00:26:11.662521 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-04 00:26:11.662532 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-04 00:26:11.662542 | orchestrator | skipping: [testbed-node-5] => 
(item=testbed-node-0)  2025-09-04 00:26:11.662553 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-09-04 00:26:11.662564 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-09-04 00:26:11.662574 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-04 00:26:11.662585 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-04 00:26:11.662596 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-09-04 00:26:11.662606 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-04 00:26:11.662617 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-09-04 00:26:11.662628 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-09-04 00:26:11.662647 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-04 00:26:11.662659 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-09-04 00:26:11.662670 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:26:11.662681 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-09-04 00:26:11.662692 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-04 00:26:11.662703 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-09-04 00:26:11.662732 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-09-04 00:26:11.662743 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:26:11.662754 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-09-04 00:26:11.662765 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-04 00:26:11.662776 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-09-04 00:26:11.662787 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:26:11.662797 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-09-04 00:26:11.662808 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-04 00:26:11.662819 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:26:11.662830 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:26:11.662841 | orchestrator | 2025-09-04 00:26:11.662852 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-09-04 00:26:11.662863 | orchestrator | 2025-09-04 00:26:11.662874 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-09-04 00:26:11.662885 | orchestrator | Thursday 04 September 2025 00:26:04 +0000 (0:00:00.410) 0:00:04.521 **** 2025-09-04 00:26:11.662896 | orchestrator | ok: [testbed-manager] 2025-09-04 00:26:11.662907 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:26:11.662918 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:26:11.662928 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:26:11.662939 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:26:11.662950 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:26:11.662961 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:26:11.662971 | orchestrator | 2025-09-04 00:26:11.662982 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-09-04 00:26:11.662993 | orchestrator | Thursday 04 September 2025 00:26:05 +0000 (0:00:01.223) 0:00:05.744 **** 2025-09-04 00:26:11.663004 | orchestrator | ok: [testbed-manager] 2025-09-04 00:26:11.663015 | orchestrator | ok: [testbed-node-1] 
2025-09-04 00:26:11.663025 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:26:11.663036 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:26:11.663047 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:26:11.663057 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:26:11.663068 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:26:11.663079 | orchestrator | 2025-09-04 00:26:11.663090 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-09-04 00:26:11.663100 | orchestrator | Thursday 04 September 2025 00:26:06 +0000 (0:00:01.253) 0:00:06.998 **** 2025-09-04 00:26:11.663112 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:26:11.663126 | orchestrator | 2025-09-04 00:26:11.663137 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-09-04 00:26:11.663148 | orchestrator | Thursday 04 September 2025 00:26:07 +0000 (0:00:00.281) 0:00:07.279 **** 2025-09-04 00:26:11.663159 | orchestrator | changed: [testbed-manager] 2025-09-04 00:26:11.663170 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:26:11.663180 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:26:11.663191 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:26:11.663202 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:26:11.663213 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:26:11.663224 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:26:11.663241 | orchestrator | 2025-09-04 00:26:11.663252 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-09-04 00:26:11.663263 | orchestrator | Thursday 04 September 2025 00:26:09 +0000 (0:00:02.073) 0:00:09.352 **** 2025-09-04 00:26:11.663274 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:26:11.663306 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:26:11.663320 | orchestrator | 2025-09-04 00:26:11.663336 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-09-04 00:26:11.663347 | orchestrator | Thursday 04 September 2025 00:26:09 +0000 (0:00:00.292) 0:00:09.645 **** 2025-09-04 00:26:11.663358 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:26:11.663383 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:26:11.663394 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:26:11.663405 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:26:11.663415 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:26:11.663426 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:26:11.663437 | orchestrator | 2025-09-04 00:26:11.663448 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-09-04 00:26:11.663459 | orchestrator | Thursday 04 September 2025 00:26:10 +0000 (0:00:01.034) 0:00:10.680 **** 2025-09-04 00:26:11.663469 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:26:11.663480 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:26:11.663491 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:26:11.663501 | orchestrator | changed: [testbed-node-4] 
2025-09-04 00:26:11.663512 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:26:11.663523 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:26:11.663533 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:26:11.663544 | orchestrator | 2025-09-04 00:26:11.663555 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-09-04 00:26:11.663566 | orchestrator | Thursday 04 September 2025 00:26:11 +0000 (0:00:00.610) 0:00:11.290 **** 2025-09-04 00:26:11.663577 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:26:11.663587 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:26:11.663598 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:26:11.663609 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:26:11.663619 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:26:11.663630 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:26:11.663641 | orchestrator | ok: [testbed-manager] 2025-09-04 00:26:11.663651 | orchestrator | 2025-09-04 00:26:11.663662 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-09-04 00:26:11.663674 | orchestrator | Thursday 04 September 2025 00:26:11 +0000 (0:00:00.460) 0:00:11.750 **** 2025-09-04 00:26:11.663685 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:26:11.663696 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:26:11.663713 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:26:23.935012 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:26:23.935136 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:26:23.935152 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:26:23.935164 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:26:23.935175 | orchestrator | 2025-09-04 00:26:23.935188 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-09-04 00:26:23.935201 | orchestrator | Thursday 04 September 2025 00:26:11 +0000 (0:00:00.220) 0:00:11.971 **** 2025-09-04 00:26:23.935215 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:26:23.935244 | orchestrator | 2025-09-04 00:26:23.935256 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-09-04 00:26:23.935268 | orchestrator | Thursday 04 September 2025 00:26:12 +0000 (0:00:00.278) 0:00:12.249 **** 2025-09-04 00:26:23.935354 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:26:23.935366 | orchestrator | 2025-09-04 00:26:23.935377 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-09-04 00:26:23.935388 | orchestrator | Thursday 04 September 2025 00:26:12 +0000 (0:00:00.306) 0:00:12.555 **** 2025-09-04 00:26:23.935399 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:26:23.935411 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:26:23.935422 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:26:23.935432 | orchestrator | ok: [testbed-manager] 2025-09-04 00:26:23.935443 | orchestrator | ok: [testbed-node-2] 2025-09-04 
00:26:23.935453 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:26:23.935464 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:26:23.935474 | orchestrator | 2025-09-04 00:26:23.935485 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-09-04 00:26:23.935496 | orchestrator | Thursday 04 September 2025 00:26:13 +0000 (0:00:01.330) 0:00:13.886 **** 2025-09-04 00:26:23.935507 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:26:23.935518 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:26:23.935528 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:26:23.935541 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:26:23.935553 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:26:23.935565 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:26:23.935578 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:26:23.935591 | orchestrator | 2025-09-04 00:26:23.935604 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-09-04 00:26:23.935617 | orchestrator | Thursday 04 September 2025 00:26:13 +0000 (0:00:00.228) 0:00:14.114 **** 2025-09-04 00:26:23.935630 | orchestrator | ok: [testbed-manager] 2025-09-04 00:26:23.935642 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:26:23.935654 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:26:23.935668 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:26:23.935681 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:26:23.935693 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:26:23.935705 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:26:23.935718 | orchestrator | 2025-09-04 00:26:23.935731 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-09-04 00:26:23.935744 | orchestrator | Thursday 04 September 2025 00:26:14 +0000 (0:00:00.541) 0:00:14.656 **** 2025-09-04 00:26:23.935756 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:26:23.935769 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:26:23.935782 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:26:23.935794 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:26:23.935806 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:26:23.935818 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:26:23.935831 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:26:23.935843 | orchestrator | 2025-09-04 00:26:23.935856 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-09-04 00:26:23.935870 | orchestrator | Thursday 04 September 2025 00:26:14 +0000 (0:00:00.261) 0:00:14.918 **** 2025-09-04 00:26:23.935883 | orchestrator | ok: [testbed-manager] 2025-09-04 00:26:23.935894 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:26:23.935905 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:26:23.935916 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:26:23.935926 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:26:23.935937 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:26:23.935947 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:26:23.935958 | orchestrator | 2025-09-04 00:26:23.935969 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-09-04 00:26:23.935979 | orchestrator | Thursday 04 September 2025 00:26:15 +0000 (0:00:00.579) 0:00:15.497 **** 2025-09-04 00:26:23.935998 | orchestrator | ok: 
[testbed-manager] 2025-09-04 00:26:23.936009 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:26:23.936019 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:26:23.936030 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:26:23.936040 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:26:23.936051 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:26:23.936061 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:26:23.936072 | orchestrator | 2025-09-04 00:26:23.936083 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-09-04 00:26:23.936093 | orchestrator | Thursday 04 September 2025 00:26:16 +0000 (0:00:01.154) 0:00:16.651 **** 2025-09-04 00:26:23.936104 | orchestrator | ok: [testbed-manager] 2025-09-04 00:26:23.936115 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:26:23.936125 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:26:23.936136 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:26:23.936147 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:26:23.936158 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:26:23.936168 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:26:23.936179 | orchestrator | 2025-09-04 00:26:23.936190 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-09-04 00:26:23.936200 | orchestrator | Thursday 04 September 2025 00:26:17 +0000 (0:00:01.241) 0:00:17.893 **** 2025-09-04 00:26:23.936229 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:26:23.936241 | orchestrator | 2025-09-04 00:26:23.936252 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-09-04 00:26:23.936262 | orchestrator | Thursday 04 September 2025 00:26:18 +0000 (0:00:00.437) 0:00:18.330 **** 2025-09-04 00:26:23.936299 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:26:23.936311 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:26:23.936322 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:26:23.936332 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:26:23.936343 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:26:23.936354 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:26:23.936364 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:26:23.936375 | orchestrator | 2025-09-04 00:26:23.936385 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-04 00:26:23.936396 | orchestrator | Thursday 04 September 2025 00:26:19 +0000 (0:00:01.389) 0:00:19.719 **** 2025-09-04 00:26:23.936407 | orchestrator | ok: [testbed-manager] 2025-09-04 00:26:23.936418 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:26:23.936428 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:26:23.936439 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:26:23.936493 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:26:23.936505 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:26:23.936515 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:26:23.936526 | orchestrator | 2025-09-04 00:26:23.936537 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-04 00:26:23.936548 | orchestrator | Thursday 04 September 2025 00:26:19 +0000 (0:00:00.219) 
0:00:19.939 **** 2025-09-04 00:26:23.936558 | orchestrator | ok: [testbed-manager] 2025-09-04 00:26:23.936569 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:26:23.936580 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:26:23.936590 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:26:23.936601 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:26:23.936611 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:26:23.936622 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:26:23.936632 | orchestrator | 2025-09-04 00:26:23.936643 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-04 00:26:23.936654 | orchestrator | Thursday 04 September 2025 00:26:19 +0000 (0:00:00.217) 0:00:20.156 **** 2025-09-04 00:26:23.936665 | orchestrator | ok: [testbed-manager] 2025-09-04 00:26:23.936683 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:26:23.936693 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:26:23.936704 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:26:23.936715 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:26:23.936725 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:26:23.936736 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:26:23.936747 | orchestrator | 2025-09-04 00:26:23.936757 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-04 00:26:23.936768 | orchestrator | Thursday 04 September 2025 00:26:20 +0000 (0:00:00.231) 0:00:20.388 **** 2025-09-04 00:26:23.936780 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:26:23.936793 | orchestrator | 2025-09-04 00:26:23.936804 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-04 00:26:23.936814 | orchestrator | Thursday 04 September 2025 00:26:20 +0000 (0:00:00.303) 0:00:20.691 **** 2025-09-04 00:26:23.936825 | orchestrator | ok: [testbed-manager] 2025-09-04 00:26:23.936836 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:26:23.936847 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:26:23.936857 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:26:23.936868 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:26:23.936878 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:26:23.936893 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:26:23.936911 | orchestrator | 2025-09-04 00:26:23.936936 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-04 00:26:23.936953 | orchestrator | Thursday 04 September 2025 00:26:20 +0000 (0:00:00.524) 0:00:21.216 **** 2025-09-04 00:26:23.936969 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:26:23.936986 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:26:23.937004 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:26:23.937024 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:26:23.937042 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:26:23.937058 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:26:23.937069 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:26:23.937080 | orchestrator | 2025-09-04 00:26:23.937091 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-04 00:26:23.937102 | orchestrator | Thursday 04 September 2025 00:26:21 +0000 
(0:00:00.218) 0:00:21.434 **** 2025-09-04 00:26:23.937112 | orchestrator | ok: [testbed-manager] 2025-09-04 00:26:23.937123 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:26:23.937134 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:26:23.937144 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:26:23.937155 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:26:23.937166 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:26:23.937176 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:26:23.937187 | orchestrator | 2025-09-04 00:26:23.937198 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-04 00:26:23.937209 | orchestrator | Thursday 04 September 2025 00:26:22 +0000 (0:00:01.036) 0:00:22.470 **** 2025-09-04 00:26:23.937220 | orchestrator | ok: [testbed-manager] 2025-09-04 00:26:23.937231 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:26:23.937241 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:26:23.937252 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:26:23.937263 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:26:23.937297 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:26:23.937309 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:26:23.937320 | orchestrator | 2025-09-04 00:26:23.937331 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-04 00:26:23.937342 | orchestrator | Thursday 04 September 2025 00:26:22 +0000 (0:00:00.606) 0:00:23.077 **** 2025-09-04 00:26:23.937353 | orchestrator | ok: [testbed-manager] 2025-09-04 00:26:23.937364 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:26:23.937374 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:26:23.937394 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:26:23.937415 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:27:06.923140 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:27:06.923292 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:27:06.923312 | orchestrator | 2025-09-04 00:27:06.923325 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-04 00:27:06.923338 | orchestrator | Thursday 04 September 2025 00:26:23 +0000 (0:00:01.064) 0:00:24.142 **** 2025-09-04 00:27:06.923349 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:27:06.923360 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:27:06.923371 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:27:06.923383 | orchestrator | changed: [testbed-manager] 2025-09-04 00:27:06.923394 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:27:06.923405 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:27:06.923416 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:27:06.923427 | orchestrator | 2025-09-04 00:27:06.923439 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-09-04 00:27:06.923450 | orchestrator | Thursday 04 September 2025 00:26:41 +0000 (0:00:17.877) 0:00:42.019 **** 2025-09-04 00:27:06.923460 | orchestrator | ok: [testbed-manager] 2025-09-04 00:27:06.923471 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:27:06.923482 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:27:06.923493 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:27:06.923504 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:27:06.923514 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:27:06.923525 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:27:06.923536 | orchestrator | 
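At this point the bootstrap play has worked through the osism.commons roles (hostname, hosts, proxy, resolvconf, repository) and is moving into osism.services.rsyslog. When troubleshooting a run like this it can help to re-apply the play against a single node and to check what the resolvconf role actually left behind; the commands below are a minimal sketch under two assumptions: that `osism apply bootstrap` honours the same `-l` limit flag used with `osism apply operator` earlier in this log, and that the node is reachable over SSH as the `ubuntu` user passed via `-u` above.

# Re-apply the bootstrap play to a single node (assumes -l is honoured as for `osism apply operator` above)
osism apply bootstrap -l testbed-node-0

# Spot-check the resolvconf role's result on that node:
# /etc/resolv.conf should resolve to the systemd-resolved stub and the service should be active
ssh ubuntu@testbed-node-0 'readlink -f /etc/resolv.conf; systemctl is-active systemd-resolved'

The second command simply mirrors the two tasks shown above ("Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf" and "Start/enable systemd-resolved service"); on success it prints the stub-resolv.conf path followed by "active".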
2025-09-04 00:27:06.923547 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-09-04 00:27:06.923558 | orchestrator | Thursday 04 September 2025 00:26:42 +0000 (0:00:00.222) 0:00:42.242 **** 2025-09-04 00:27:06.923569 | orchestrator | ok: [testbed-manager] 2025-09-04 00:27:06.923580 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:27:06.923590 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:27:06.923601 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:27:06.923612 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:27:06.923622 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:27:06.923633 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:27:06.923644 | orchestrator | 2025-09-04 00:27:06.923655 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-09-04 00:27:06.923666 | orchestrator | Thursday 04 September 2025 00:26:42 +0000 (0:00:00.233) 0:00:42.475 **** 2025-09-04 00:27:06.923677 | orchestrator | ok: [testbed-manager] 2025-09-04 00:27:06.923688 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:27:06.923699 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:27:06.923710 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:27:06.923721 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:27:06.923731 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:27:06.923742 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:27:06.923753 | orchestrator | 2025-09-04 00:27:06.923764 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-09-04 00:27:06.923775 | orchestrator | Thursday 04 September 2025 00:26:42 +0000 (0:00:00.227) 0:00:42.702 **** 2025-09-04 00:27:06.923789 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:27:06.923803 | orchestrator | 2025-09-04 00:27:06.923814 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-09-04 00:27:06.923825 | orchestrator | Thursday 04 September 2025 00:26:42 +0000 (0:00:00.306) 0:00:43.009 **** 2025-09-04 00:27:06.923836 | orchestrator | ok: [testbed-manager] 2025-09-04 00:27:06.923846 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:27:06.923857 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:27:06.923867 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:27:06.923878 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:27:06.923889 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:27:06.923926 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:27:06.923937 | orchestrator | 2025-09-04 00:27:06.923948 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-09-04 00:27:06.923975 | orchestrator | Thursday 04 September 2025 00:26:44 +0000 (0:00:01.778) 0:00:44.787 **** 2025-09-04 00:27:06.923986 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:27:06.923997 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:27:06.924007 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:27:06.924018 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:27:06.924029 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:27:06.924039 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:27:06.924050 | orchestrator | changed: [testbed-manager] 2025-09-04 00:27:06.924060 
| orchestrator | 2025-09-04 00:27:06.924071 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-09-04 00:27:06.924083 | orchestrator | Thursday 04 September 2025 00:26:46 +0000 (0:00:01.937) 0:00:46.725 **** 2025-09-04 00:27:06.924093 | orchestrator | ok: [testbed-manager] 2025-09-04 00:27:06.924104 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:27:06.924115 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:27:06.924125 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:27:06.924136 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:27:06.924146 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:27:06.924157 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:27:06.924167 | orchestrator | 2025-09-04 00:27:06.924178 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-09-04 00:27:06.924188 | orchestrator | Thursday 04 September 2025 00:26:47 +0000 (0:00:00.836) 0:00:47.562 **** 2025-09-04 00:27:06.924200 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:27:06.924213 | orchestrator | 2025-09-04 00:27:06.924224 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-09-04 00:27:06.924235 | orchestrator | Thursday 04 September 2025 00:26:47 +0000 (0:00:00.302) 0:00:47.864 **** 2025-09-04 00:27:06.924262 | orchestrator | changed: [testbed-manager] 2025-09-04 00:27:06.924273 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:27:06.924284 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:27:06.924295 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:27:06.924306 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:27:06.924317 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:27:06.924327 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:27:06.924338 | orchestrator | 2025-09-04 00:27:06.924367 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-09-04 00:27:06.924379 | orchestrator | Thursday 04 September 2025 00:26:48 +0000 (0:00:01.107) 0:00:48.972 **** 2025-09-04 00:27:06.924389 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:27:06.924400 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:27:06.924411 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:27:06.924422 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:27:06.924432 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:27:06.924443 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:27:06.924454 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:27:06.924465 | orchestrator | 2025-09-04 00:27:06.924476 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-09-04 00:27:06.924486 | orchestrator | Thursday 04 September 2025 00:26:49 +0000 (0:00:00.341) 0:00:49.313 **** 2025-09-04 00:27:06.924497 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:27:06.924508 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:27:06.924519 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:27:06.924529 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:27:06.924540 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:27:06.924551 | orchestrator | changed: [testbed-node-2] 2025-09-04 
00:27:06.924561 | orchestrator | changed: [testbed-manager] 2025-09-04 00:27:06.924584 | orchestrator | 2025-09-04 00:27:06.924595 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-09-04 00:27:06.924606 | orchestrator | Thursday 04 September 2025 00:27:01 +0000 (0:00:12.042) 0:01:01.356 **** 2025-09-04 00:27:06.924616 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:27:06.924627 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:27:06.924637 | orchestrator | ok: [testbed-manager] 2025-09-04 00:27:06.924648 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:27:06.924659 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:27:06.924669 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:27:06.924680 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:27:06.924690 | orchestrator | 2025-09-04 00:27:06.924701 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-09-04 00:27:06.924712 | orchestrator | Thursday 04 September 2025 00:27:02 +0000 (0:00:01.537) 0:01:02.894 **** 2025-09-04 00:27:06.924723 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:27:06.924733 | orchestrator | ok: [testbed-manager] 2025-09-04 00:27:06.924744 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:27:06.924755 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:27:06.924765 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:27:06.924776 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:27:06.924787 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:27:06.924797 | orchestrator | 2025-09-04 00:27:06.924808 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-09-04 00:27:06.924818 | orchestrator | Thursday 04 September 2025 00:27:03 +0000 (0:00:00.899) 0:01:03.794 **** 2025-09-04 00:27:06.924829 | orchestrator | ok: [testbed-manager] 2025-09-04 00:27:06.924840 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:27:06.924850 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:27:06.924861 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:27:06.924871 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:27:06.924882 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:27:06.924892 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:27:06.924903 | orchestrator | 2025-09-04 00:27:06.924914 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-09-04 00:27:06.924925 | orchestrator | Thursday 04 September 2025 00:27:03 +0000 (0:00:00.244) 0:01:04.038 **** 2025-09-04 00:27:06.924935 | orchestrator | ok: [testbed-manager] 2025-09-04 00:27:06.924946 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:27:06.924956 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:27:06.924966 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:27:06.924977 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:27:06.924987 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:27:06.924998 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:27:06.925008 | orchestrator | 2025-09-04 00:27:06.925019 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-09-04 00:27:06.925030 | orchestrator | Thursday 04 September 2025 00:27:04 +0000 (0:00:00.255) 0:01:04.293 **** 2025-09-04 00:27:06.925041 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, 
testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:27:06.925053 | orchestrator | 2025-09-04 00:27:06.925063 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-09-04 00:27:06.925074 | orchestrator | Thursday 04 September 2025 00:27:04 +0000 (0:00:00.323) 0:01:04.617 **** 2025-09-04 00:27:06.925085 | orchestrator | ok: [testbed-manager] 2025-09-04 00:27:06.925095 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:27:06.925106 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:27:06.925116 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:27:06.925127 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:27:06.925138 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:27:06.925148 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:27:06.925158 | orchestrator | 2025-09-04 00:27:06.925169 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-09-04 00:27:06.925187 | orchestrator | Thursday 04 September 2025 00:27:06 +0000 (0:00:01.692) 0:01:06.309 **** 2025-09-04 00:27:06.925197 | orchestrator | changed: [testbed-manager] 2025-09-04 00:27:06.925208 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:27:06.925219 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:27:06.925230 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:27:06.925241 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:27:06.925278 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:27:06.925289 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:27:06.925300 | orchestrator | 2025-09-04 00:27:06.925311 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-09-04 00:27:06.925322 | orchestrator | Thursday 04 September 2025 00:27:06 +0000 (0:00:00.585) 0:01:06.894 **** 2025-09-04 00:27:06.925332 | orchestrator | ok: [testbed-manager] 2025-09-04 00:27:06.925343 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:27:06.925354 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:27:06.925364 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:27:06.925375 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:27:06.925386 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:27:06.925396 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:27:06.925407 | orchestrator | 2025-09-04 00:27:06.925425 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-09-04 00:29:30.489556 | orchestrator | Thursday 04 September 2025 00:27:06 +0000 (0:00:00.237) 0:01:07.132 **** 2025-09-04 00:29:30.489666 | orchestrator | ok: [testbed-manager] 2025-09-04 00:29:30.489681 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:29:30.489691 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:29:30.489701 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:29:30.489710 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:29:30.489720 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:29:30.489747 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:29:30.489758 | orchestrator | 2025-09-04 00:29:30.489769 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-09-04 00:29:30.489779 | orchestrator | Thursday 04 September 2025 00:27:08 +0000 (0:00:01.157) 0:01:08.289 **** 2025-09-04 00:29:30.489789 | orchestrator | changed: [testbed-manager] 2025-09-04 00:29:30.489800 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:29:30.489810 | orchestrator | changed: 
[testbed-node-1] 2025-09-04 00:29:30.489819 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:29:30.489829 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:29:30.489838 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:29:30.489847 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:29:30.489857 | orchestrator | 2025-09-04 00:29:30.489868 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-09-04 00:29:30.489877 | orchestrator | Thursday 04 September 2025 00:27:09 +0000 (0:00:01.834) 0:01:10.124 **** 2025-09-04 00:29:30.489887 | orchestrator | ok: [testbed-manager] 2025-09-04 00:29:30.489897 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:29:30.489906 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:29:30.489916 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:29:30.489925 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:29:30.489935 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:29:30.489944 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:29:30.489954 | orchestrator | 2025-09-04 00:29:30.489964 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-09-04 00:29:30.489974 | orchestrator | Thursday 04 September 2025 00:27:12 +0000 (0:00:02.548) 0:01:12.673 **** 2025-09-04 00:29:30.489984 | orchestrator | ok: [testbed-manager] 2025-09-04 00:29:30.489993 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:29:30.490003 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:29:30.490069 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:29:30.490082 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:29:30.490093 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:29:30.490105 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:29:30.490116 | orchestrator | 2025-09-04 00:29:30.490128 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-09-04 00:29:30.490185 | orchestrator | Thursday 04 September 2025 00:27:52 +0000 (0:00:39.908) 0:01:52.581 **** 2025-09-04 00:29:30.490198 | orchestrator | changed: [testbed-manager] 2025-09-04 00:29:30.490209 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:29:30.490220 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:29:30.490231 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:29:30.490242 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:29:30.490254 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:29:30.490266 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:29:30.490277 | orchestrator | 2025-09-04 00:29:30.490289 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-09-04 00:29:30.490301 | orchestrator | Thursday 04 September 2025 00:29:10 +0000 (0:01:18.610) 0:03:11.191 **** 2025-09-04 00:29:30.490312 | orchestrator | ok: [testbed-manager] 2025-09-04 00:29:30.490324 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:29:30.490335 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:29:30.490347 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:29:30.490357 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:29:30.490369 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:29:30.490380 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:29:30.490391 | orchestrator | 2025-09-04 00:29:30.490403 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-09-04 00:29:30.490415 | orchestrator | Thursday 04 September 2025 00:29:12 
+0000 (0:00:01.751) 0:03:12.943 **** 2025-09-04 00:29:30.490426 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:29:30.490436 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:29:30.490451 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:29:30.490461 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:29:30.490470 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:29:30.490479 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:29:30.490489 | orchestrator | changed: [testbed-manager] 2025-09-04 00:29:30.490498 | orchestrator | 2025-09-04 00:29:30.490508 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-09-04 00:29:30.490517 | orchestrator | Thursday 04 September 2025 00:29:25 +0000 (0:00:12.315) 0:03:25.259 **** 2025-09-04 00:29:30.490535 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-09-04 00:29:30.490555 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-09-04 00:29:30.490589 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-09-04 00:29:30.490601 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-09-04 00:29:30.490619 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-09-04 00:29:30.490629 | orchestrator | 2025-09-04 00:29:30.490639 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-09-04 00:29:30.490649 | orchestrator | Thursday 04 September 2025 00:29:25 +0000 (0:00:00.411) 0:03:25.671 **** 2025-09-04 00:29:30.490658 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-04 00:29:30.490668 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:29:30.490678 | 
orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-04 00:29:30.490688 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-04 00:29:30.490697 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:29:30.490707 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:29:30.490716 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-04 00:29:30.490726 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:29:30.490735 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-04 00:29:30.490745 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-04 00:29:30.490754 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-04 00:29:30.490764 | orchestrator | 2025-09-04 00:29:30.490773 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-09-04 00:29:30.490783 | orchestrator | Thursday 04 September 2025 00:29:26 +0000 (0:00:00.601) 0:03:26.273 **** 2025-09-04 00:29:30.490792 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-04 00:29:30.490802 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-04 00:29:30.490812 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-04 00:29:30.490821 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-04 00:29:30.490831 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-04 00:29:30.490845 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-04 00:29:30.490854 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-04 00:29:30.490864 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-04 00:29:30.490873 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-04 00:29:30.490883 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-04 00:29:30.490893 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:29:30.490902 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-04 00:29:30.490912 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-04 00:29:30.490922 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-04 00:29:30.490931 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-04 00:29:30.490941 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-04 00:29:30.490951 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-04 00:29:30.490966 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 
'value': 1})  2025-09-04 00:29:30.490975 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-04 00:29:30.490985 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-04 00:29:30.490995 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-04 00:29:30.491010 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-04 00:29:32.638549 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-04 00:29:32.638659 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-04 00:29:32.638674 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-04 00:29:32.638687 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-04 00:29:32.638698 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-04 00:29:32.638709 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-04 00:29:32.638720 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-04 00:29:32.638730 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-04 00:29:32.638741 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-04 00:29:32.638752 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-04 00:29:32.638763 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:29:32.638775 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:29:32.638785 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-04 00:29:32.638796 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-04 00:29:32.638807 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-04 00:29:32.638817 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-04 00:29:32.638828 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-04 00:29:32.638839 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-04 00:29:32.638850 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-04 00:29:32.638861 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-04 00:29:32.638872 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-04 00:29:32.638883 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:29:32.638893 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-04 00:29:32.638904 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-04 
00:29:32.638915 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-04 00:29:32.638925 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-04 00:29:32.638936 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-04 00:29:32.638971 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-04 00:29:32.639006 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-04 00:29:32.639017 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-04 00:29:32.639028 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-09-04 00:29:32.639039 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-09-04 00:29:32.639050 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-04 00:29:32.639060 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-04 00:29:32.639071 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-04 00:29:32.639082 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-04 00:29:32.639092 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-04 00:29:32.639103 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-09-04 00:29:32.639114 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-04 00:29:32.639125 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-04 00:29:32.639135 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-04 00:29:32.639146 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-04 00:29:32.639157 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-04 00:29:32.639228 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-04 00:29:32.639241 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-04 00:29:32.639252 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-04 00:29:32.639263 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-04 00:29:32.639274 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-04 00:29:32.639285 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-04 00:29:32.639296 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-04 00:29:32.639306 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-04 00:29:32.639317 | orchestrator | changed: 
[testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-04 00:29:32.639328 | orchestrator | 2025-09-04 00:29:32.639339 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-09-04 00:29:32.639350 | orchestrator | Thursday 04 September 2025 00:29:30 +0000 (0:00:04.424) 0:03:30.698 **** 2025-09-04 00:29:32.639361 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-04 00:29:32.639389 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-04 00:29:32.639401 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-04 00:29:32.639412 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-04 00:29:32.639422 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-04 00:29:32.639433 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-04 00:29:32.639443 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-04 00:29:32.639463 | orchestrator | 2025-09-04 00:29:32.639474 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-09-04 00:29:32.639485 | orchestrator | Thursday 04 September 2025 00:29:31 +0000 (0:00:00.590) 0:03:31.288 **** 2025-09-04 00:29:32.639496 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-04 00:29:32.639507 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:29:32.639518 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-04 00:29:32.639529 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-04 00:29:32.639539 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:29:32.639550 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:29:32.639561 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-04 00:29:32.639572 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:29:32.639583 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-04 00:29:32.639598 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-04 00:29:32.639609 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-04 00:29:32.639620 | orchestrator | 2025-09-04 00:29:32.639631 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-09-04 00:29:32.639642 | orchestrator | Thursday 04 September 2025 00:29:31 +0000 (0:00:00.616) 0:03:31.905 **** 2025-09-04 00:29:32.639653 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-04 00:29:32.639663 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-04 00:29:32.639674 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:29:32.639685 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:29:32.639695 | orchestrator | skipping: [testbed-node-1] => 
(item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-04 00:29:32.639706 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-04 00:29:32.639717 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:29:32.639728 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:29:32.639738 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-09-04 00:29:32.639749 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-09-04 00:29:32.639760 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-09-04 00:29:32.639770 | orchestrator | 2025-09-04 00:29:32.639781 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-09-04 00:29:32.639792 | orchestrator | Thursday 04 September 2025 00:29:32 +0000 (0:00:00.675) 0:03:32.580 **** 2025-09-04 00:29:32.639803 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:29:32.639814 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:29:32.639824 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:29:32.639835 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:29:32.639846 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:29:32.639863 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:29:44.195775 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:29:44.195896 | orchestrator | 2025-09-04 00:29:44.195913 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-09-04 00:29:44.195926 | orchestrator | Thursday 04 September 2025 00:29:32 +0000 (0:00:00.273) 0:03:32.853 **** 2025-09-04 00:29:44.195938 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:29:44.195951 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:29:44.195962 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:29:44.196000 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:29:44.196012 | orchestrator | ok: [testbed-manager] 2025-09-04 00:29:44.196023 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:29:44.196034 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:29:44.196045 | orchestrator | 2025-09-04 00:29:44.196056 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-09-04 00:29:44.196067 | orchestrator | Thursday 04 September 2025 00:29:38 +0000 (0:00:05.792) 0:03:38.646 **** 2025-09-04 00:29:44.196077 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-09-04 00:29:44.196089 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-09-04 00:29:44.196100 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:29:44.196111 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-09-04 00:29:44.196121 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:29:44.196132 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:29:44.196143 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-09-04 00:29:44.196154 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-09-04 00:29:44.196219 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:29:44.196230 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-09-04 00:29:44.196244 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:29:44.196255 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:29:44.196266 | 
orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-09-04 00:29:44.196277 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:29:44.196287 | orchestrator | 2025-09-04 00:29:44.196298 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-09-04 00:29:44.196312 | orchestrator | Thursday 04 September 2025 00:29:38 +0000 (0:00:00.331) 0:03:38.978 **** 2025-09-04 00:29:44.196325 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-09-04 00:29:44.196338 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-09-04 00:29:44.196351 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-09-04 00:29:44.196363 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-09-04 00:29:44.196376 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-09-04 00:29:44.196388 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-09-04 00:29:44.196400 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-09-04 00:29:44.196413 | orchestrator | 2025-09-04 00:29:44.196426 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-09-04 00:29:44.196438 | orchestrator | Thursday 04 September 2025 00:29:39 +0000 (0:00:01.047) 0:03:40.026 **** 2025-09-04 00:29:44.196454 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:29:44.196470 | orchestrator | 2025-09-04 00:29:44.196481 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-09-04 00:29:44.196492 | orchestrator | Thursday 04 September 2025 00:29:40 +0000 (0:00:00.485) 0:03:40.511 **** 2025-09-04 00:29:44.196502 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:29:44.196513 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:29:44.196524 | orchestrator | ok: [testbed-manager] 2025-09-04 00:29:44.196534 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:29:44.196545 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:29:44.196555 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:29:44.196581 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:29:44.196592 | orchestrator | 2025-09-04 00:29:44.196603 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-09-04 00:29:44.196614 | orchestrator | Thursday 04 September 2025 00:29:41 +0000 (0:00:01.176) 0:03:41.688 **** 2025-09-04 00:29:44.196624 | orchestrator | ok: [testbed-manager] 2025-09-04 00:29:44.196635 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:29:44.196646 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:29:44.196656 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:29:44.196667 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:29:44.196677 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:29:44.196694 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:29:44.196705 | orchestrator | 2025-09-04 00:29:44.196715 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-09-04 00:29:44.196726 | orchestrator | Thursday 04 September 2025 00:29:42 +0000 (0:00:00.580) 0:03:42.268 **** 2025-09-04 00:29:44.196737 | orchestrator | changed: [testbed-manager] 2025-09-04 00:29:44.196747 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:29:44.196758 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:29:44.196768 | 
orchestrator | changed: [testbed-node-2] 2025-09-04 00:29:44.196779 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:29:44.196790 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:29:44.196800 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:29:44.196811 | orchestrator | 2025-09-04 00:29:44.196822 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-09-04 00:29:44.196832 | orchestrator | Thursday 04 September 2025 00:29:42 +0000 (0:00:00.599) 0:03:42.867 **** 2025-09-04 00:29:44.196843 | orchestrator | ok: [testbed-manager] 2025-09-04 00:29:44.196854 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:29:44.196864 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:29:44.196875 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:29:44.196886 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:29:44.196896 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:29:44.196907 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:29:44.196917 | orchestrator | 2025-09-04 00:29:44.196928 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-09-04 00:29:44.196939 | orchestrator | Thursday 04 September 2025 00:29:43 +0000 (0:00:00.586) 0:03:43.453 **** 2025-09-04 00:29:44.196974 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756944236.2387242, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-04 00:29:44.196990 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756944265.776875, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-04 00:29:44.197001 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756944274.7781491, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-04 00:29:44.197013 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756944272.2316513, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-04 00:29:44.197037 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756944278.8817077, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-04 00:29:44.197048 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756944263.8142138, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-04 00:29:44.197060 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756944272.214459, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-04 00:29:44.197079 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-04 00:30:00.448971 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-04 00:30:00.449085 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-04 00:30:00.449101 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-04 00:30:00.449137 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-04 00:30:00.449185 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-04 00:30:00.449198 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-04 00:30:00.449210 | orchestrator | 2025-09-04 00:30:00.449224 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-09-04 00:30:00.449254 | orchestrator | Thursday 04 September 2025 00:29:44 +0000 (0:00:00.947) 0:03:44.400 **** 2025-09-04 00:30:00.449266 | orchestrator | changed: [testbed-manager] 2025-09-04 00:30:00.449278 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:30:00.449289 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:30:00.449300 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:30:00.449310 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:30:00.449321 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:30:00.449331 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:30:00.449342 | orchestrator | 2025-09-04 00:30:00.449353 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-09-04 00:30:00.449364 | orchestrator | Thursday 04 September 2025 00:29:45 +0000 (0:00:01.116) 0:03:45.516 **** 2025-09-04 00:30:00.449375 
| orchestrator | changed: [testbed-manager] 2025-09-04 00:30:00.449386 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:30:00.449397 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:30:00.449407 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:30:00.449436 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:30:00.449447 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:30:00.449458 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:30:00.449469 | orchestrator | 2025-09-04 00:30:00.449480 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-09-04 00:30:00.449491 | orchestrator | Thursday 04 September 2025 00:29:46 +0000 (0:00:01.167) 0:03:46.684 **** 2025-09-04 00:30:00.449504 | orchestrator | changed: [testbed-manager] 2025-09-04 00:30:00.449517 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:30:00.449529 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:30:00.449541 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:30:00.449555 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:30:00.449568 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:30:00.449580 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:30:00.449592 | orchestrator | 2025-09-04 00:30:00.449605 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-09-04 00:30:00.449618 | orchestrator | Thursday 04 September 2025 00:29:47 +0000 (0:00:01.081) 0:03:47.765 **** 2025-09-04 00:30:00.449639 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:30:00.449652 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:30:00.449665 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:30:00.449678 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:30:00.449690 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:30:00.449704 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:30:00.449718 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:30:00.449730 | orchestrator | 2025-09-04 00:30:00.449743 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-09-04 00:30:00.449756 | orchestrator | Thursday 04 September 2025 00:29:47 +0000 (0:00:00.299) 0:03:48.064 **** 2025-09-04 00:30:00.449768 | orchestrator | ok: [testbed-manager] 2025-09-04 00:30:00.449782 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:30:00.449794 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:30:00.449807 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:30:00.449820 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:30:00.449833 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:30:00.449845 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:30:00.449856 | orchestrator | 2025-09-04 00:30:00.449867 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-09-04 00:30:00.449877 | orchestrator | Thursday 04 September 2025 00:29:48 +0000 (0:00:00.737) 0:03:48.802 **** 2025-09-04 00:30:00.449891 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:30:00.449904 | orchestrator | 2025-09-04 00:30:00.449916 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-09-04 00:30:00.449927 | orchestrator | Thursday 
04 September 2025 00:29:49 +0000 (0:00:00.428) 0:03:49.230 **** 2025-09-04 00:30:00.449937 | orchestrator | ok: [testbed-manager] 2025-09-04 00:30:00.449948 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:30:00.449959 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:30:00.449969 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:30:00.449980 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:30:00.449991 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:30:00.450001 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:30:00.450012 | orchestrator | 2025-09-04 00:30:00.450083 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-09-04 00:30:00.450095 | orchestrator | Thursday 04 September 2025 00:29:57 +0000 (0:00:08.159) 0:03:57.390 **** 2025-09-04 00:30:00.450111 | orchestrator | ok: [testbed-manager] 2025-09-04 00:30:00.450122 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:30:00.450133 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:30:00.450144 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:30:00.450172 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:30:00.450183 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:30:00.450194 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:30:00.450204 | orchestrator | 2025-09-04 00:30:00.450215 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-09-04 00:30:00.450226 | orchestrator | Thursday 04 September 2025 00:29:58 +0000 (0:00:01.204) 0:03:58.594 **** 2025-09-04 00:30:00.450237 | orchestrator | ok: [testbed-manager] 2025-09-04 00:30:00.450247 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:30:00.450258 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:30:00.450268 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:30:00.450279 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:30:00.450289 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:30:00.450300 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:30:00.450310 | orchestrator | 2025-09-04 00:30:00.450321 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-09-04 00:30:00.450332 | orchestrator | Thursday 04 September 2025 00:29:59 +0000 (0:00:01.018) 0:03:59.612 **** 2025-09-04 00:30:00.450342 | orchestrator | ok: [testbed-manager] 2025-09-04 00:30:00.450360 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:30:00.450371 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:30:00.450381 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:30:00.450392 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:30:00.450402 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:30:00.450413 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:30:00.450423 | orchestrator | 2025-09-04 00:30:00.450434 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-09-04 00:30:00.450446 | orchestrator | Thursday 04 September 2025 00:29:59 +0000 (0:00:00.288) 0:03:59.901 **** 2025-09-04 00:30:00.450456 | orchestrator | ok: [testbed-manager] 2025-09-04 00:30:00.450467 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:30:00.450477 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:30:00.450488 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:30:00.450498 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:30:00.450509 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:30:00.450519 | orchestrator | ok: [testbed-node-5] 2025-09-04 
00:30:00.450529 | orchestrator | 2025-09-04 00:30:00.450540 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-09-04 00:30:00.450551 | orchestrator | Thursday 04 September 2025 00:30:00 +0000 (0:00:00.457) 0:04:00.359 **** 2025-09-04 00:30:00.450562 | orchestrator | ok: [testbed-manager] 2025-09-04 00:30:00.450572 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:30:00.450582 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:30:00.450593 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:30:00.450603 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:30:00.450622 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:31:08.874172 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:31:08.874318 | orchestrator | 2025-09-04 00:31:08.874346 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-09-04 00:31:08.874368 | orchestrator | Thursday 04 September 2025 00:30:00 +0000 (0:00:00.302) 0:04:00.662 **** 2025-09-04 00:31:08.874388 | orchestrator | ok: [testbed-manager] 2025-09-04 00:31:08.874410 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:31:08.874431 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:31:08.874453 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:31:08.874475 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:31:08.874496 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:31:08.874518 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:31:08.874539 | orchestrator | 2025-09-04 00:31:08.874562 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-09-04 00:31:08.874588 | orchestrator | Thursday 04 September 2025 00:30:06 +0000 (0:00:05.697) 0:04:06.360 **** 2025-09-04 00:31:08.874615 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:31:08.874644 | orchestrator | 2025-09-04 00:31:08.874667 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-09-04 00:31:08.874693 | orchestrator | Thursday 04 September 2025 00:30:06 +0000 (0:00:00.396) 0:04:06.756 **** 2025-09-04 00:31:08.874720 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-09-04 00:31:08.874746 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-09-04 00:31:08.874772 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-09-04 00:31:08.874795 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:31:08.874820 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-09-04 00:31:08.874844 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:31:08.874868 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-09-04 00:31:08.874894 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-09-04 00:31:08.874917 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-09-04 00:31:08.874940 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:31:08.875004 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-09-04 00:31:08.875029 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-09-04 00:31:08.875052 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-09-04 00:31:08.875074 | 
orchestrator | skipping: [testbed-node-2] 2025-09-04 00:31:08.875094 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-09-04 00:31:08.875146 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-09-04 00:31:08.875171 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:31:08.875191 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:31:08.875211 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-09-04 00:31:08.875230 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-09-04 00:31:08.875251 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:31:08.875271 | orchestrator | 2025-09-04 00:31:08.875290 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-09-04 00:31:08.875308 | orchestrator | Thursday 04 September 2025 00:30:06 +0000 (0:00:00.329) 0:04:07.086 **** 2025-09-04 00:31:08.875347 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:31:08.875366 | orchestrator | 2025-09-04 00:31:08.875384 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-09-04 00:31:08.875401 | orchestrator | Thursday 04 September 2025 00:30:07 +0000 (0:00:00.401) 0:04:07.487 **** 2025-09-04 00:31:08.875418 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-09-04 00:31:08.875433 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:31:08.875451 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-09-04 00:31:08.875469 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-09-04 00:31:08.875485 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:31:08.875502 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:31:08.875518 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-09-04 00:31:08.875534 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-09-04 00:31:08.875551 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:31:08.875568 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-09-04 00:31:08.875585 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:31:08.875600 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:31:08.875617 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-09-04 00:31:08.875634 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:31:08.875651 | orchestrator | 2025-09-04 00:31:08.875667 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-09-04 00:31:08.875684 | orchestrator | Thursday 04 September 2025 00:30:07 +0000 (0:00:00.331) 0:04:07.819 **** 2025-09-04 00:31:08.875701 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:31:08.875719 | orchestrator | 2025-09-04 00:31:08.875737 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-09-04 00:31:08.875755 | orchestrator | Thursday 04 September 2025 00:30:08 +0000 (0:00:00.417) 
0:04:08.236 **** 2025-09-04 00:31:08.875772 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:31:08.875817 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:31:08.875838 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:31:08.875858 | orchestrator | changed: [testbed-manager] 2025-09-04 00:31:08.875877 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:31:08.875895 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:31:08.875914 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:31:08.875952 | orchestrator | 2025-09-04 00:31:08.875971 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-09-04 00:31:08.875987 | orchestrator | Thursday 04 September 2025 00:30:41 +0000 (0:00:33.884) 0:04:42.120 **** 2025-09-04 00:31:08.876002 | orchestrator | changed: [testbed-manager] 2025-09-04 00:31:08.876018 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:31:08.876035 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:31:08.876052 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:31:08.876070 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:31:08.876087 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:31:08.876106 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:31:08.876155 | orchestrator | 2025-09-04 00:31:08.876177 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-09-04 00:31:08.876189 | orchestrator | Thursday 04 September 2025 00:30:49 +0000 (0:00:08.067) 0:04:50.188 **** 2025-09-04 00:31:08.876200 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:31:08.876211 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:31:08.876222 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:31:08.876232 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:31:08.876243 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:31:08.876254 | orchestrator | changed: [testbed-manager] 2025-09-04 00:31:08.876264 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:31:08.876275 | orchestrator | 2025-09-04 00:31:08.876285 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-09-04 00:31:08.876296 | orchestrator | Thursday 04 September 2025 00:30:57 +0000 (0:00:07.531) 0:04:57.719 **** 2025-09-04 00:31:08.876307 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:31:08.876318 | orchestrator | ok: [testbed-manager] 2025-09-04 00:31:08.876329 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:31:08.876340 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:31:08.876350 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:31:08.876361 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:31:08.876372 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:31:08.876382 | orchestrator | 2025-09-04 00:31:08.876393 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-09-04 00:31:08.876405 | orchestrator | Thursday 04 September 2025 00:30:59 +0000 (0:00:01.796) 0:04:59.516 **** 2025-09-04 00:31:08.876416 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:31:08.876426 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:31:08.876437 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:31:08.876448 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:31:08.876458 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:31:08.876469 | orchestrator | changed: [testbed-manager] 2025-09-04 00:31:08.876479 | orchestrator 
| changed: [testbed-node-3] 2025-09-04 00:31:08.876490 | orchestrator | 2025-09-04 00:31:08.876501 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-09-04 00:31:08.876512 | orchestrator | Thursday 04 September 2025 00:31:04 +0000 (0:00:05.635) 0:05:05.151 **** 2025-09-04 00:31:08.876523 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:31:08.876536 | orchestrator | 2025-09-04 00:31:08.876557 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-09-04 00:31:08.876568 | orchestrator | Thursday 04 September 2025 00:31:05 +0000 (0:00:00.569) 0:05:05.721 **** 2025-09-04 00:31:08.876579 | orchestrator | changed: [testbed-manager] 2025-09-04 00:31:08.876590 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:31:08.876600 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:31:08.876611 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:31:08.876621 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:31:08.876632 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:31:08.876643 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:31:08.876654 | orchestrator | 2025-09-04 00:31:08.876665 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-09-04 00:31:08.876685 | orchestrator | Thursday 04 September 2025 00:31:06 +0000 (0:00:00.703) 0:05:06.425 **** 2025-09-04 00:31:08.876696 | orchestrator | ok: [testbed-manager] 2025-09-04 00:31:08.876707 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:31:08.876718 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:31:08.876728 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:31:08.876739 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:31:08.876750 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:31:08.876760 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:31:08.876771 | orchestrator | 2025-09-04 00:31:08.876782 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-09-04 00:31:08.876793 | orchestrator | Thursday 04 September 2025 00:31:07 +0000 (0:00:01.621) 0:05:08.047 **** 2025-09-04 00:31:08.876803 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:31:08.876814 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:31:08.876825 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:31:08.876835 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:31:08.876846 | orchestrator | changed: [testbed-manager] 2025-09-04 00:31:08.876857 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:31:08.876867 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:31:08.876878 | orchestrator | 2025-09-04 00:31:08.876889 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-09-04 00:31:08.876900 | orchestrator | Thursday 04 September 2025 00:31:08 +0000 (0:00:00.748) 0:05:08.795 **** 2025-09-04 00:31:08.876910 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:31:08.876921 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:31:08.876932 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:31:08.876943 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:31:08.876953 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:31:08.876964 | 
orchestrator | skipping: [testbed-node-4] 2025-09-04 00:31:08.876974 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:31:08.876985 | orchestrator | 2025-09-04 00:31:08.876996 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-09-04 00:31:08.877019 | orchestrator | Thursday 04 September 2025 00:31:08 +0000 (0:00:00.287) 0:05:09.082 **** 2025-09-04 00:31:35.156366 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:31:35.156488 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:31:35.156504 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:31:35.156516 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:31:35.156527 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:31:35.156538 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:31:35.156549 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:31:35.156560 | orchestrator | 2025-09-04 00:31:35.156572 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-09-04 00:31:35.156584 | orchestrator | Thursday 04 September 2025 00:31:09 +0000 (0:00:00.396) 0:05:09.479 **** 2025-09-04 00:31:35.156595 | orchestrator | ok: [testbed-manager] 2025-09-04 00:31:35.156607 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:31:35.156617 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:31:35.156628 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:31:35.156639 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:31:35.156649 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:31:35.156660 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:31:35.156670 | orchestrator | 2025-09-04 00:31:35.156681 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-09-04 00:31:35.156693 | orchestrator | Thursday 04 September 2025 00:31:09 +0000 (0:00:00.279) 0:05:09.759 **** 2025-09-04 00:31:35.156703 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:31:35.156714 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:31:35.156725 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:31:35.156736 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:31:35.156746 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:31:35.156757 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:31:35.156795 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:31:35.156807 | orchestrator | 2025-09-04 00:31:35.156818 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-09-04 00:31:35.156829 | orchestrator | Thursday 04 September 2025 00:31:09 +0000 (0:00:00.296) 0:05:10.056 **** 2025-09-04 00:31:35.156839 | orchestrator | ok: [testbed-manager] 2025-09-04 00:31:35.156850 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:31:35.156861 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:31:35.156871 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:31:35.156881 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:31:35.156894 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:31:35.156905 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:31:35.156918 | orchestrator | 2025-09-04 00:31:35.156930 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2025-09-04 00:31:35.156942 | orchestrator | Thursday 04 September 2025 00:31:10 +0000 (0:00:00.292) 0:05:10.348 **** 2025-09-04 00:31:35.156955 | orchestrator | ok: [testbed-manager] =>  2025-09-04 00:31:35.156968 | 
orchestrator |  docker_version: 5:27.5.1 2025-09-04 00:31:35.156980 | orchestrator | ok: [testbed-node-0] =>  2025-09-04 00:31:35.156993 | orchestrator |  docker_version: 5:27.5.1 2025-09-04 00:31:35.157005 | orchestrator | ok: [testbed-node-1] =>  2025-09-04 00:31:35.157017 | orchestrator |  docker_version: 5:27.5.1 2025-09-04 00:31:35.157029 | orchestrator | ok: [testbed-node-2] =>  2025-09-04 00:31:35.157040 | orchestrator |  docker_version: 5:27.5.1 2025-09-04 00:31:35.157053 | orchestrator | ok: [testbed-node-3] =>  2025-09-04 00:31:35.157065 | orchestrator |  docker_version: 5:27.5.1 2025-09-04 00:31:35.157077 | orchestrator | ok: [testbed-node-4] =>  2025-09-04 00:31:35.157090 | orchestrator |  docker_version: 5:27.5.1 2025-09-04 00:31:35.157103 | orchestrator | ok: [testbed-node-5] =>  2025-09-04 00:31:35.157176 | orchestrator |  docker_version: 5:27.5.1 2025-09-04 00:31:35.157188 | orchestrator | 2025-09-04 00:31:35.157201 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-09-04 00:31:35.157215 | orchestrator | Thursday 04 September 2025 00:31:10 +0000 (0:00:00.323) 0:05:10.671 **** 2025-09-04 00:31:35.157228 | orchestrator | ok: [testbed-manager] =>  2025-09-04 00:31:35.157241 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-04 00:31:35.157254 | orchestrator | ok: [testbed-node-0] =>  2025-09-04 00:31:35.157264 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-04 00:31:35.157274 | orchestrator | ok: [testbed-node-1] =>  2025-09-04 00:31:35.157285 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-04 00:31:35.157295 | orchestrator | ok: [testbed-node-2] =>  2025-09-04 00:31:35.157306 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-04 00:31:35.157316 | orchestrator | ok: [testbed-node-3] =>  2025-09-04 00:31:35.157327 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-04 00:31:35.157337 | orchestrator | ok: [testbed-node-4] =>  2025-09-04 00:31:35.157348 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-04 00:31:35.157358 | orchestrator | ok: [testbed-node-5] =>  2025-09-04 00:31:35.157369 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-04 00:31:35.157380 | orchestrator | 2025-09-04 00:31:35.157390 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-09-04 00:31:35.157401 | orchestrator | Thursday 04 September 2025 00:31:10 +0000 (0:00:00.291) 0:05:10.963 **** 2025-09-04 00:31:35.157412 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:31:35.157422 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:31:35.157433 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:31:35.157443 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:31:35.157454 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:31:35.157464 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:31:35.157474 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:31:35.157485 | orchestrator | 2025-09-04 00:31:35.157496 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-09-04 00:31:35.157506 | orchestrator | Thursday 04 September 2025 00:31:11 +0000 (0:00:00.275) 0:05:11.239 **** 2025-09-04 00:31:35.157525 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:31:35.157536 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:31:35.157546 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:31:35.157557 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:31:35.157567 | 
orchestrator | skipping: [testbed-node-3] 2025-09-04 00:31:35.157578 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:31:35.157588 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:31:35.157599 | orchestrator | 2025-09-04 00:31:35.157628 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-09-04 00:31:35.157639 | orchestrator | Thursday 04 September 2025 00:31:11 +0000 (0:00:00.296) 0:05:11.535 **** 2025-09-04 00:31:35.157670 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:31:35.157684 | orchestrator | 2025-09-04 00:31:35.157696 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-09-04 00:31:35.157706 | orchestrator | Thursday 04 September 2025 00:31:11 +0000 (0:00:00.427) 0:05:11.962 **** 2025-09-04 00:31:35.157717 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:31:35.157728 | orchestrator | ok: [testbed-manager] 2025-09-04 00:31:35.157739 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:31:35.157750 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:31:35.157760 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:31:35.157771 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:31:35.157782 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:31:35.157793 | orchestrator | 2025-09-04 00:31:35.157804 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-09-04 00:31:35.157814 | orchestrator | Thursday 04 September 2025 00:31:12 +0000 (0:00:00.818) 0:05:12.781 **** 2025-09-04 00:31:35.157825 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:31:35.157836 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:31:35.157846 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:31:35.157857 | orchestrator | ok: [testbed-manager] 2025-09-04 00:31:35.157868 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:31:35.157878 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:31:35.157889 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:31:35.157899 | orchestrator | 2025-09-04 00:31:35.157910 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-09-04 00:31:35.157922 | orchestrator | Thursday 04 September 2025 00:31:15 +0000 (0:00:03.146) 0:05:15.928 **** 2025-09-04 00:31:35.157933 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-09-04 00:31:35.157944 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-09-04 00:31:35.157955 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-09-04 00:31:35.157966 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-09-04 00:31:35.157977 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-09-04 00:31:35.157988 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-09-04 00:31:35.157998 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:31:35.158009 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-09-04 00:31:35.158079 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:31:35.158090 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-09-04 00:31:35.158101 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-09-04 
00:31:35.158137 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-09-04 00:31:35.158148 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-09-04 00:31:35.158159 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-09-04 00:31:35.158169 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:31:35.158180 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-09-04 00:31:35.158191 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-09-04 00:31:35.158210 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-09-04 00:31:35.158221 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:31:35.158231 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-09-04 00:31:35.158242 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-09-04 00:31:35.158253 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:31:35.158263 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-09-04 00:31:35.158280 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:31:35.158291 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-09-04 00:31:35.158301 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-09-04 00:31:35.158312 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-09-04 00:31:35.158323 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:31:35.158334 | orchestrator | 2025-09-04 00:31:35.158345 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-09-04 00:31:35.158356 | orchestrator | Thursday 04 September 2025 00:31:16 +0000 (0:00:00.592) 0:05:16.521 **** 2025-09-04 00:31:35.158367 | orchestrator | ok: [testbed-manager] 2025-09-04 00:31:35.158378 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:31:35.158388 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:31:35.158399 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:31:35.158409 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:31:35.158420 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:31:35.158430 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:31:35.158441 | orchestrator | 2025-09-04 00:31:35.158452 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-09-04 00:31:35.158463 | orchestrator | Thursday 04 September 2025 00:31:22 +0000 (0:00:06.475) 0:05:22.996 **** 2025-09-04 00:31:35.158474 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:31:35.158484 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:31:35.158495 | orchestrator | ok: [testbed-manager] 2025-09-04 00:31:35.158506 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:31:35.158516 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:31:35.158527 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:31:35.158537 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:31:35.158548 | orchestrator | 2025-09-04 00:31:35.158559 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-09-04 00:31:35.158569 | orchestrator | Thursday 04 September 2025 00:31:23 +0000 (0:00:01.174) 0:05:24.171 **** 2025-09-04 00:31:35.158580 | orchestrator | ok: [testbed-manager] 2025-09-04 00:31:35.158591 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:31:35.158602 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:31:35.158612 | orchestrator | 
changed: [testbed-node-4] 2025-09-04 00:31:35.158623 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:31:35.158633 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:31:35.158644 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:31:35.158655 | orchestrator | 2025-09-04 00:31:35.158666 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-09-04 00:31:35.158677 | orchestrator | Thursday 04 September 2025 00:31:31 +0000 (0:00:07.953) 0:05:32.125 **** 2025-09-04 00:31:35.158688 | orchestrator | changed: [testbed-manager] 2025-09-04 00:31:35.158698 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:31:35.158709 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:31:35.158728 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:32:18.544834 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:32:18.544958 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:32:18.544974 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:32:18.544986 | orchestrator | 2025-09-04 00:32:18.544999 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-09-04 00:32:18.545011 | orchestrator | Thursday 04 September 2025 00:31:35 +0000 (0:00:03.242) 0:05:35.367 **** 2025-09-04 00:32:18.545022 | orchestrator | ok: [testbed-manager] 2025-09-04 00:32:18.545035 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:32:18.545073 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:32:18.545118 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:32:18.545129 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:32:18.545140 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:32:18.545150 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:32:18.545161 | orchestrator | 2025-09-04 00:32:18.545172 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-09-04 00:32:18.545183 | orchestrator | Thursday 04 September 2025 00:31:36 +0000 (0:00:01.242) 0:05:36.609 **** 2025-09-04 00:32:18.545193 | orchestrator | ok: [testbed-manager] 2025-09-04 00:32:18.545204 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:32:18.545214 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:32:18.545225 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:32:18.545235 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:32:18.545245 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:32:18.545256 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:32:18.545266 | orchestrator | 2025-09-04 00:32:18.545277 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-09-04 00:32:18.545287 | orchestrator | Thursday 04 September 2025 00:31:37 +0000 (0:00:01.185) 0:05:37.795 **** 2025-09-04 00:32:18.545298 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:32:18.545308 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:32:18.545319 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:32:18.545329 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:32:18.545340 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:32:18.545350 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:32:18.545361 | orchestrator | changed: [testbed-manager] 2025-09-04 00:32:18.545373 | orchestrator | 2025-09-04 00:32:18.545386 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-09-04 00:32:18.545399 | orchestrator | 
Thursday 04 September 2025 00:31:38 +0000 (0:00:00.778) 0:05:38.574 **** 2025-09-04 00:32:18.545413 | orchestrator | ok: [testbed-manager] 2025-09-04 00:32:18.545425 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:32:18.545437 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:32:18.545449 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:32:18.545462 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:32:18.545475 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:32:18.545488 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:32:18.545500 | orchestrator | 2025-09-04 00:32:18.545513 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-09-04 00:32:18.545525 | orchestrator | Thursday 04 September 2025 00:31:47 +0000 (0:00:09.360) 0:05:47.934 **** 2025-09-04 00:32:18.545538 | orchestrator | changed: [testbed-manager] 2025-09-04 00:32:18.545550 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:32:18.545562 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:32:18.545574 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:32:18.545587 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:32:18.545599 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:32:18.545611 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:32:18.545624 | orchestrator | 2025-09-04 00:32:18.545637 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-09-04 00:32:18.545663 | orchestrator | Thursday 04 September 2025 00:31:48 +0000 (0:00:00.913) 0:05:48.848 **** 2025-09-04 00:32:18.545678 | orchestrator | ok: [testbed-manager] 2025-09-04 00:32:18.545691 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:32:18.545704 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:32:18.545716 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:32:18.545729 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:32:18.545740 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:32:18.545750 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:32:18.545761 | orchestrator | 2025-09-04 00:32:18.545772 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-09-04 00:32:18.545782 | orchestrator | Thursday 04 September 2025 00:31:57 +0000 (0:00:08.701) 0:05:57.550 **** 2025-09-04 00:32:18.545801 | orchestrator | ok: [testbed-manager] 2025-09-04 00:32:18.545812 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:32:18.545822 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:32:18.545832 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:32:18.545843 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:32:18.545853 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:32:18.545864 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:32:18.545874 | orchestrator | 2025-09-04 00:32:18.545885 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-09-04 00:32:18.545896 | orchestrator | Thursday 04 September 2025 00:32:08 +0000 (0:00:11.245) 0:06:08.795 **** 2025-09-04 00:32:18.545906 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-09-04 00:32:18.545917 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-09-04 00:32:18.545928 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-09-04 00:32:18.545938 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-09-04 00:32:18.545949 
| orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-09-04 00:32:18.545959 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-09-04 00:32:18.545970 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-09-04 00:32:18.545980 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-09-04 00:32:18.545990 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-09-04 00:32:18.546001 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-09-04 00:32:18.546011 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-09-04 00:32:18.546074 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-09-04 00:32:18.546102 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-09-04 00:32:18.546114 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-09-04 00:32:18.546124 | orchestrator | 2025-09-04 00:32:18.546135 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-09-04 00:32:18.546163 | orchestrator | Thursday 04 September 2025 00:32:09 +0000 (0:00:01.140) 0:06:09.936 **** 2025-09-04 00:32:18.546174 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:32:18.546185 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:32:18.546196 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:32:18.546207 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:32:18.546217 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:32:18.546228 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:32:18.546238 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:32:18.546249 | orchestrator | 2025-09-04 00:32:18.546260 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-09-04 00:32:18.546271 | orchestrator | Thursday 04 September 2025 00:32:10 +0000 (0:00:00.490) 0:06:10.427 **** 2025-09-04 00:32:18.546282 | orchestrator | ok: [testbed-manager] 2025-09-04 00:32:18.546292 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:32:18.546303 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:32:18.546314 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:32:18.546324 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:32:18.546335 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:32:18.546345 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:32:18.546356 | orchestrator | 2025-09-04 00:32:18.546367 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-09-04 00:32:18.546379 | orchestrator | Thursday 04 September 2025 00:32:14 +0000 (0:00:03.920) 0:06:14.347 **** 2025-09-04 00:32:18.546389 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:32:18.546400 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:32:18.546411 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:32:18.546421 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:32:18.546432 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:32:18.546443 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:32:18.546456 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:32:18.546485 | orchestrator | 2025-09-04 00:32:18.546505 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-09-04 00:32:18.546534 | orchestrator | Thursday 04 September 2025 00:32:14 +0000 (0:00:00.454) 0:06:14.802 **** 
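(Editor's note: the tasks above install Docker Engine from an apt repository and pin it to the version printed earlier, 5:27.5.1 — repository GPG key and source are added, docker-ce and docker-ce-cli are pinned, containerd is locked, and the packages are installed. The following is a minimal sketch of how such an install-and-pin sequence can be expressed; the host group, repository URL, keyring path, pin-file location and codename handling are illustrative assumptions, not the osism.services.docker role code.)

---
# Illustrative sketch only -- assumes an Ubuntu 24.04 target and the upstream
# Docker repository; /etc/apt/keyrings is expected to already exist.
- hosts: docker_hosts            # host group name is an assumption
  become: true
  tasks:
    - name: Fetch Docker apt repository key
      ansible.builtin.get_url:
        url: https://download.docker.com/linux/ubuntu/gpg
        dest: /etc/apt/keyrings/docker.asc
        mode: "0644"

    - name: Add Docker apt repository
      ansible.builtin.apt_repository:
        repo: "deb [signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
        state: present

    - name: Pin docker-ce packages to the version printed in the log
      ansible.builtin.copy:
        dest: /etc/apt/preferences.d/docker-ce
        content: |
          Package: docker-ce*
          Pin: version 5:27.5.1*
          Pin-Priority: 1000
        mode: "0644"

    - name: Install container runtime packages
      ansible.builtin.apt:
        name:
          - containerd.io
          - docker-ce-cli
          - docker-ce
        update_cache: true
        state: present

    - name: Hold containerd.io so routine upgrades cannot move it
      ansible.builtin.dpkg_selections:
        name: containerd.io
        selection: hold

(Pinning plus a package hold keeps a later apt upgrade or unattended-upgrades run from silently moving the container runtime away from the version the deployment was tested against.)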
2025-09-04 00:32:18.546555 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-09-04 00:32:18.546571 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-09-04 00:32:18.546588 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:32:18.546604 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-09-04 00:32:18.546621 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-09-04 00:32:18.546637 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:32:18.546653 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-09-04 00:32:18.546670 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-09-04 00:32:18.546688 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-09-04 00:32:18.546706 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-09-04 00:32:18.546723 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:32:18.546741 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-09-04 00:32:18.546759 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-09-04 00:32:18.546777 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:32:18.546793 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-09-04 00:32:18.546813 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-09-04 00:32:18.546825 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:32:18.546835 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:32:18.546845 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-09-04 00:32:18.546856 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-09-04 00:32:18.546867 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:32:18.546877 | orchestrator | 2025-09-04 00:32:18.546888 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-09-04 00:32:18.546899 | orchestrator | Thursday 04 September 2025 00:32:15 +0000 (0:00:00.723) 0:06:15.526 **** 2025-09-04 00:32:18.546909 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:32:18.546920 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:32:18.546930 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:32:18.546941 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:32:18.546951 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:32:18.546962 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:32:18.546972 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:32:18.546983 | orchestrator | 2025-09-04 00:32:18.546993 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-09-04 00:32:18.547004 | orchestrator | Thursday 04 September 2025 00:32:15 +0000 (0:00:00.494) 0:06:16.020 **** 2025-09-04 00:32:18.547014 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:32:18.547025 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:32:18.547035 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:32:18.547046 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:32:18.547056 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:32:18.547066 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:32:18.547099 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:32:18.547111 | orchestrator | 2025-09-04 00:32:18.547121 | orchestrator | TASK 
[osism.services.docker : Install packages required by docker login] ******* 2025-09-04 00:32:18.547132 | orchestrator | Thursday 04 September 2025 00:32:16 +0000 (0:00:00.515) 0:06:16.535 **** 2025-09-04 00:32:18.547142 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:32:18.547153 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:32:18.547163 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:32:18.547173 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:32:18.547184 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:32:18.547202 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:32:18.547213 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:32:18.547223 | orchestrator | 2025-09-04 00:32:18.547234 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-09-04 00:32:18.547244 | orchestrator | Thursday 04 September 2025 00:32:16 +0000 (0:00:00.507) 0:06:17.042 **** 2025-09-04 00:32:18.547255 | orchestrator | ok: [testbed-manager] 2025-09-04 00:32:18.547275 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:32:39.357257 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:32:39.357381 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:32:39.357397 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:32:39.357409 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:32:39.357420 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:32:39.357431 | orchestrator | 2025-09-04 00:32:39.357443 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-09-04 00:32:39.357457 | orchestrator | Thursday 04 September 2025 00:32:18 +0000 (0:00:01.712) 0:06:18.755 **** 2025-09-04 00:32:39.357468 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:32:39.357482 | orchestrator | 2025-09-04 00:32:39.357493 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-09-04 00:32:39.357504 | orchestrator | Thursday 04 September 2025 00:32:19 +0000 (0:00:01.023) 0:06:19.778 **** 2025-09-04 00:32:39.357515 | orchestrator | ok: [testbed-manager] 2025-09-04 00:32:39.357526 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:32:39.357537 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:32:39.357548 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:32:39.357558 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:32:39.357569 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:32:39.357580 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:32:39.357590 | orchestrator | 2025-09-04 00:32:39.357601 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-09-04 00:32:39.357612 | orchestrator | Thursday 04 September 2025 00:32:20 +0000 (0:00:00.794) 0:06:20.572 **** 2025-09-04 00:32:39.357623 | orchestrator | ok: [testbed-manager] 2025-09-04 00:32:39.357633 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:32:39.357644 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:32:39.357656 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:32:39.357667 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:32:39.357677 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:32:39.357688 | orchestrator | changed: [testbed-node-5] 2025-09-04 
00:32:39.357698 | orchestrator | 2025-09-04 00:32:39.357709 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-09-04 00:32:39.357720 | orchestrator | Thursday 04 September 2025 00:32:21 +0000 (0:00:00.882) 0:06:21.455 **** 2025-09-04 00:32:39.357730 | orchestrator | ok: [testbed-manager] 2025-09-04 00:32:39.357741 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:32:39.357751 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:32:39.357762 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:32:39.357773 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:32:39.357785 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:32:39.357798 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:32:39.357811 | orchestrator | 2025-09-04 00:32:39.357824 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-09-04 00:32:39.357837 | orchestrator | Thursday 04 September 2025 00:32:22 +0000 (0:00:01.303) 0:06:22.758 **** 2025-09-04 00:32:39.357850 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:32:39.357861 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:32:39.357874 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:32:39.357886 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:32:39.357898 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:32:39.357911 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:32:39.357946 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:32:39.357959 | orchestrator | 2025-09-04 00:32:39.357972 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-09-04 00:32:39.357985 | orchestrator | Thursday 04 September 2025 00:32:24 +0000 (0:00:01.496) 0:06:24.255 **** 2025-09-04 00:32:39.357997 | orchestrator | ok: [testbed-manager] 2025-09-04 00:32:39.358011 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:32:39.358100 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:32:39.358113 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:32:39.358126 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:32:39.358138 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:32:39.358150 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:32:39.358160 | orchestrator | 2025-09-04 00:32:39.358171 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-09-04 00:32:39.358182 | orchestrator | Thursday 04 September 2025 00:32:25 +0000 (0:00:01.320) 0:06:25.576 **** 2025-09-04 00:32:39.358192 | orchestrator | changed: [testbed-manager] 2025-09-04 00:32:39.358203 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:32:39.358213 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:32:39.358224 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:32:39.358234 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:32:39.358245 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:32:39.358255 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:32:39.358266 | orchestrator | 2025-09-04 00:32:39.358276 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-09-04 00:32:39.358287 | orchestrator | Thursday 04 September 2025 00:32:26 +0000 (0:00:01.293) 0:06:26.869 **** 2025-09-04 00:32:39.358298 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, 
testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:32:39.358308 | orchestrator | 2025-09-04 00:32:39.358319 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-09-04 00:32:39.358330 | orchestrator | Thursday 04 September 2025 00:32:27 +0000 (0:00:00.999) 0:06:27.869 **** 2025-09-04 00:32:39.358340 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:32:39.358351 | orchestrator | ok: [testbed-manager] 2025-09-04 00:32:39.358362 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:32:39.358373 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:32:39.358383 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:32:39.358394 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:32:39.358404 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:32:39.358414 | orchestrator | 2025-09-04 00:32:39.358425 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-09-04 00:32:39.358436 | orchestrator | Thursday 04 September 2025 00:32:28 +0000 (0:00:01.286) 0:06:29.156 **** 2025-09-04 00:32:39.358446 | orchestrator | ok: [testbed-manager] 2025-09-04 00:32:39.358457 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:32:39.358488 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:32:39.358500 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:32:39.358510 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:32:39.358521 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:32:39.358531 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:32:39.358542 | orchestrator | 2025-09-04 00:32:39.358552 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-09-04 00:32:39.358563 | orchestrator | Thursday 04 September 2025 00:32:29 +0000 (0:00:01.054) 0:06:30.210 **** 2025-09-04 00:32:39.358574 | orchestrator | ok: [testbed-manager] 2025-09-04 00:32:39.358585 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:32:39.358595 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:32:39.358605 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:32:39.358616 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:32:39.358626 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:32:39.358636 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:32:39.358647 | orchestrator | 2025-09-04 00:32:39.358667 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-09-04 00:32:39.358678 | orchestrator | Thursday 04 September 2025 00:32:31 +0000 (0:00:01.086) 0:06:31.297 **** 2025-09-04 00:32:39.358689 | orchestrator | ok: [testbed-manager] 2025-09-04 00:32:39.358699 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:32:39.358710 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:32:39.358720 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:32:39.358730 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:32:39.358741 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:32:39.358751 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:32:39.358762 | orchestrator | 2025-09-04 00:32:39.358773 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-09-04 00:32:39.358783 | orchestrator | Thursday 04 September 2025 00:32:32 +0000 (0:00:01.057) 0:06:32.354 **** 2025-09-04 00:32:39.358794 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:32:39.358805 | orchestrator | 2025-09-04 00:32:39.358816 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-04 00:32:39.358826 | orchestrator | Thursday 04 September 2025 00:32:33 +0000 (0:00:01.052) 0:06:33.407 **** 2025-09-04 00:32:39.358837 | orchestrator | 2025-09-04 00:32:39.358848 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-04 00:32:39.358858 | orchestrator | Thursday 04 September 2025 00:32:33 +0000 (0:00:00.039) 0:06:33.446 **** 2025-09-04 00:32:39.358869 | orchestrator | 2025-09-04 00:32:39.358880 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-04 00:32:39.358890 | orchestrator | Thursday 04 September 2025 00:32:33 +0000 (0:00:00.038) 0:06:33.484 **** 2025-09-04 00:32:39.358901 | orchestrator | 2025-09-04 00:32:39.358911 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-04 00:32:39.358922 | orchestrator | Thursday 04 September 2025 00:32:33 +0000 (0:00:00.045) 0:06:33.529 **** 2025-09-04 00:32:39.358933 | orchestrator | 2025-09-04 00:32:39.358961 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-04 00:32:39.358973 | orchestrator | Thursday 04 September 2025 00:32:33 +0000 (0:00:00.038) 0:06:33.568 **** 2025-09-04 00:32:39.358983 | orchestrator | 2025-09-04 00:32:39.358994 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-04 00:32:39.359010 | orchestrator | Thursday 04 September 2025 00:32:33 +0000 (0:00:00.038) 0:06:33.606 **** 2025-09-04 00:32:39.359020 | orchestrator | 2025-09-04 00:32:39.359031 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-04 00:32:39.359042 | orchestrator | Thursday 04 September 2025 00:32:33 +0000 (0:00:00.047) 0:06:33.654 **** 2025-09-04 00:32:39.359052 | orchestrator | 2025-09-04 00:32:39.359063 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-04 00:32:39.359091 | orchestrator | Thursday 04 September 2025 00:32:33 +0000 (0:00:00.039) 0:06:33.693 **** 2025-09-04 00:32:39.359102 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:32:39.359113 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:32:39.359123 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:32:39.359134 | orchestrator | 2025-09-04 00:32:39.359145 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-09-04 00:32:39.359156 | orchestrator | Thursday 04 September 2025 00:32:34 +0000 (0:00:01.118) 0:06:34.811 **** 2025-09-04 00:32:39.359166 | orchestrator | changed: [testbed-manager] 2025-09-04 00:32:39.359177 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:32:39.359188 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:32:39.359198 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:32:39.359209 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:32:39.359219 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:32:39.359230 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:32:39.359240 | orchestrator | 2025-09-04 00:32:39.359258 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-09-04 00:32:39.359269 | orchestrator | Thursday 04 September 2025 
00:32:35 +0000 (0:00:01.207) 0:06:36.018 **** 2025-09-04 00:32:39.359279 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:32:39.359290 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:32:39.359300 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:32:39.359311 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:32:39.359321 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:32:39.359331 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:32:39.359342 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:32:39.359353 | orchestrator | 2025-09-04 00:32:39.359364 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-09-04 00:32:39.359374 | orchestrator | Thursday 04 September 2025 00:32:38 +0000 (0:00:02.493) 0:06:38.512 **** 2025-09-04 00:32:39.359385 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:32:39.359396 | orchestrator | 2025-09-04 00:32:39.359406 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-09-04 00:32:39.359417 | orchestrator | Thursday 04 September 2025 00:32:38 +0000 (0:00:00.092) 0:06:38.605 **** 2025-09-04 00:32:39.359428 | orchestrator | ok: [testbed-manager] 2025-09-04 00:32:39.359438 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:32:39.359449 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:32:39.359459 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:32:39.359477 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:33:05.214529 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:33:05.214655 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:33:05.214678 | orchestrator | 2025-09-04 00:33:05.214699 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-09-04 00:33:05.214727 | orchestrator | Thursday 04 September 2025 00:32:39 +0000 (0:00:00.959) 0:06:39.565 **** 2025-09-04 00:33:05.214752 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:33:05.214772 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:33:05.214791 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:33:05.214810 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:33:05.214826 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:33:05.214836 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:33:05.214848 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:33:05.214859 | orchestrator | 2025-09-04 00:33:05.214870 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-09-04 00:33:05.214881 | orchestrator | Thursday 04 September 2025 00:32:39 +0000 (0:00:00.506) 0:06:40.071 **** 2025-09-04 00:33:05.214893 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:33:05.214906 | orchestrator | 2025-09-04 00:33:05.214918 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-09-04 00:33:05.214929 | orchestrator | Thursday 04 September 2025 00:32:40 +0000 (0:00:01.027) 0:06:41.099 **** 2025-09-04 00:33:05.214940 | orchestrator | ok: [testbed-manager] 2025-09-04 00:33:05.214953 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:33:05.214963 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:33:05.214974 | orchestrator | ok: 
[testbed-node-2] 2025-09-04 00:33:05.214985 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:33:05.214996 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:33:05.215006 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:33:05.215017 | orchestrator | 2025-09-04 00:33:05.215028 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-09-04 00:33:05.215039 | orchestrator | Thursday 04 September 2025 00:32:41 +0000 (0:00:00.769) 0:06:41.869 **** 2025-09-04 00:33:05.215050 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-09-04 00:33:05.215108 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-09-04 00:33:05.215120 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-09-04 00:33:05.215163 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-09-04 00:33:05.215174 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-09-04 00:33:05.215185 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-09-04 00:33:05.215195 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-09-04 00:33:05.215206 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-09-04 00:33:05.215218 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-09-04 00:33:05.215229 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-09-04 00:33:05.215239 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-09-04 00:33:05.215250 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-09-04 00:33:05.215261 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-09-04 00:33:05.215288 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-09-04 00:33:05.215299 | orchestrator | 2025-09-04 00:33:05.215310 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-09-04 00:33:05.215321 | orchestrator | Thursday 04 September 2025 00:32:44 +0000 (0:00:02.443) 0:06:44.312 **** 2025-09-04 00:33:05.215332 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:33:05.215343 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:33:05.215354 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:33:05.215364 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:33:05.215375 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:33:05.215386 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:33:05.215396 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:33:05.215407 | orchestrator | 2025-09-04 00:33:05.215418 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-09-04 00:33:05.215429 | orchestrator | Thursday 04 September 2025 00:32:44 +0000 (0:00:00.472) 0:06:44.785 **** 2025-09-04 00:33:05.215442 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:33:05.215458 | orchestrator | 2025-09-04 00:33:05.215477 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-09-04 00:33:05.215496 | orchestrator | Thursday 04 September 2025 00:32:45 +0000 (0:00:00.987) 0:06:45.773 **** 2025-09-04 00:33:05.215520 | orchestrator | ok: [testbed-manager] 
2025-09-04 00:33:05.215544 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:33:05.215562 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:33:05.215582 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:33:05.215600 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:33:05.215619 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:33:05.215630 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:33:05.215641 | orchestrator | 2025-09-04 00:33:05.215652 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-09-04 00:33:05.215663 | orchestrator | Thursday 04 September 2025 00:32:46 +0000 (0:00:00.818) 0:06:46.591 **** 2025-09-04 00:33:05.215674 | orchestrator | ok: [testbed-manager] 2025-09-04 00:33:05.215685 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:33:05.215695 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:33:05.215706 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:33:05.215717 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:33:05.215727 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:33:05.215738 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:33:05.215748 | orchestrator | 2025-09-04 00:33:05.215759 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-09-04 00:33:05.215788 | orchestrator | Thursday 04 September 2025 00:32:47 +0000 (0:00:00.782) 0:06:47.374 **** 2025-09-04 00:33:05.215800 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:33:05.215810 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:33:05.215821 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:33:05.215849 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:33:05.215861 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:33:05.215871 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:33:05.215882 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:33:05.215893 | orchestrator | 2025-09-04 00:33:05.215903 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-09-04 00:33:05.215914 | orchestrator | Thursday 04 September 2025 00:32:47 +0000 (0:00:00.478) 0:06:47.852 **** 2025-09-04 00:33:05.215925 | orchestrator | ok: [testbed-manager] 2025-09-04 00:33:05.215936 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:33:05.215946 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:33:05.215957 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:33:05.215968 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:33:05.215978 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:33:05.215989 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:33:05.215999 | orchestrator | 2025-09-04 00:33:05.216010 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-09-04 00:33:05.216021 | orchestrator | Thursday 04 September 2025 00:32:49 +0000 (0:00:01.677) 0:06:49.530 **** 2025-09-04 00:33:05.216032 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:33:05.216043 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:33:05.216081 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:33:05.216094 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:33:05.216105 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:33:05.216116 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:33:05.216126 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:33:05.216137 | orchestrator | 2025-09-04 00:33:05.216148 | orchestrator | TASK 
[osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-09-04 00:33:05.216159 | orchestrator | Thursday 04 September 2025 00:32:49 +0000 (0:00:00.506) 0:06:50.037 **** 2025-09-04 00:33:05.216169 | orchestrator | ok: [testbed-manager] 2025-09-04 00:33:05.216180 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:33:05.216191 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:33:05.216202 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:33:05.216212 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:33:05.216223 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:33:05.216233 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:33:05.216244 | orchestrator | 2025-09-04 00:33:05.216255 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-09-04 00:33:05.216266 | orchestrator | Thursday 04 September 2025 00:32:58 +0000 (0:00:08.268) 0:06:58.306 **** 2025-09-04 00:33:05.216276 | orchestrator | ok: [testbed-manager] 2025-09-04 00:33:05.216287 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:33:05.216298 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:33:05.216308 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:33:05.216319 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:33:05.216329 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:33:05.216340 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:33:05.216351 | orchestrator | 2025-09-04 00:33:05.216361 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-09-04 00:33:05.216372 | orchestrator | Thursday 04 September 2025 00:32:59 +0000 (0:00:01.277) 0:06:59.583 **** 2025-09-04 00:33:05.216383 | orchestrator | ok: [testbed-manager] 2025-09-04 00:33:05.216393 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:33:05.216404 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:33:05.216415 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:33:05.216432 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:33:05.216443 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:33:05.216453 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:33:05.216464 | orchestrator | 2025-09-04 00:33:05.216475 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-09-04 00:33:05.216486 | orchestrator | Thursday 04 September 2025 00:33:00 +0000 (0:00:01.615) 0:07:01.199 **** 2025-09-04 00:33:05.216497 | orchestrator | ok: [testbed-manager] 2025-09-04 00:33:05.216514 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:33:05.216525 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:33:05.216536 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:33:05.216546 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:33:05.216557 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:33:05.216568 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:33:05.216578 | orchestrator | 2025-09-04 00:33:05.216597 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-04 00:33:05.216616 | orchestrator | Thursday 04 September 2025 00:33:02 +0000 (0:00:01.895) 0:07:03.094 **** 2025-09-04 00:33:05.216636 | orchestrator | ok: [testbed-manager] 2025-09-04 00:33:05.216654 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:33:05.216672 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:33:05.216691 | orchestrator | ok: [testbed-node-2] 2025-09-04 
00:33:05.216710 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:33:05.216724 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:33:05.216734 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:33:05.216745 | orchestrator | 2025-09-04 00:33:05.216756 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-04 00:33:05.216767 | orchestrator | Thursday 04 September 2025 00:33:03 +0000 (0:00:00.827) 0:07:03.922 **** 2025-09-04 00:33:05.216777 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:33:05.216788 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:33:05.216799 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:33:05.216809 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:33:05.216820 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:33:05.216831 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:33:05.216842 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:33:05.216852 | orchestrator | 2025-09-04 00:33:05.216863 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-09-04 00:33:05.216874 | orchestrator | Thursday 04 September 2025 00:33:04 +0000 (0:00:01.002) 0:07:04.924 **** 2025-09-04 00:33:05.216884 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:33:05.216895 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:33:05.216905 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:33:05.216916 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:33:05.216934 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:33:05.216952 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:33:05.216970 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:33:05.216988 | orchestrator | 2025-09-04 00:33:05.217017 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-09-04 00:33:37.163302 | orchestrator | Thursday 04 September 2025 00:33:05 +0000 (0:00:00.496) 0:07:05.421 **** 2025-09-04 00:33:37.163426 | orchestrator | ok: [testbed-manager] 2025-09-04 00:33:37.163444 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:33:37.163456 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:33:37.163467 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:33:37.163477 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:33:37.163489 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:33:37.163500 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:33:37.163511 | orchestrator | 2025-09-04 00:33:37.163523 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-09-04 00:33:37.163534 | orchestrator | Thursday 04 September 2025 00:33:05 +0000 (0:00:00.510) 0:07:05.931 **** 2025-09-04 00:33:37.163545 | orchestrator | ok: [testbed-manager] 2025-09-04 00:33:37.163556 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:33:37.163567 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:33:37.163577 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:33:37.163588 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:33:37.163598 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:33:37.163608 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:33:37.163619 | orchestrator | 2025-09-04 00:33:37.163630 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-09-04 00:33:37.163641 | orchestrator | Thursday 04 September 2025 00:33:06 +0000 (0:00:00.475) 0:07:06.407 **** 2025-09-04 00:33:37.163678 | 
orchestrator | ok: [testbed-manager] 2025-09-04 00:33:37.163689 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:33:37.163700 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:33:37.163710 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:33:37.163720 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:33:37.163731 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:33:37.163741 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:33:37.163751 | orchestrator | 2025-09-04 00:33:37.163762 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-09-04 00:33:37.163772 | orchestrator | Thursday 04 September 2025 00:33:06 +0000 (0:00:00.456) 0:07:06.864 **** 2025-09-04 00:33:37.163783 | orchestrator | ok: [testbed-manager] 2025-09-04 00:33:37.163793 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:33:37.163804 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:33:37.163814 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:33:37.163824 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:33:37.163835 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:33:37.163847 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:33:37.163859 | orchestrator | 2025-09-04 00:33:37.163871 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-09-04 00:33:37.163884 | orchestrator | Thursday 04 September 2025 00:33:12 +0000 (0:00:05.765) 0:07:12.630 **** 2025-09-04 00:33:37.163897 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:33:37.163910 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:33:37.163923 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:33:37.163935 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:33:37.163947 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:33:37.163960 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:33:37.163972 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:33:37.163985 | orchestrator | 2025-09-04 00:33:37.163997 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-09-04 00:33:37.164010 | orchestrator | Thursday 04 September 2025 00:33:12 +0000 (0:00:00.489) 0:07:13.119 **** 2025-09-04 00:33:37.164068 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:33:37.164086 | orchestrator | 2025-09-04 00:33:37.164099 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-09-04 00:33:37.164112 | orchestrator | Thursday 04 September 2025 00:33:13 +0000 (0:00:00.793) 0:07:13.913 **** 2025-09-04 00:33:37.164124 | orchestrator | ok: [testbed-manager] 2025-09-04 00:33:37.164137 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:33:37.164150 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:33:37.164163 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:33:37.164173 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:33:37.164184 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:33:37.164194 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:33:37.164205 | orchestrator | 2025-09-04 00:33:37.164215 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-09-04 00:33:37.164226 | orchestrator | Thursday 04 September 2025 00:33:15 +0000 (0:00:01.942) 0:07:15.855 **** 
2025-09-04 00:33:37.164237 | orchestrator | ok: [testbed-manager] 2025-09-04 00:33:37.164247 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:33:37.164258 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:33:37.164268 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:33:37.164278 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:33:37.164289 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:33:37.164299 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:33:37.164310 | orchestrator | 2025-09-04 00:33:37.164320 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-09-04 00:33:37.164331 | orchestrator | Thursday 04 September 2025 00:33:16 +0000 (0:00:01.052) 0:07:16.908 **** 2025-09-04 00:33:37.164342 | orchestrator | ok: [testbed-manager] 2025-09-04 00:33:37.164352 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:33:37.164371 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:33:37.164381 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:33:37.164392 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:33:37.164402 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:33:37.164413 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:33:37.164423 | orchestrator | 2025-09-04 00:33:37.164434 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-09-04 00:33:37.164445 | orchestrator | Thursday 04 September 2025 00:33:17 +0000 (0:00:00.840) 0:07:17.749 **** 2025-09-04 00:33:37.164456 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-04 00:33:37.164468 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-04 00:33:37.164479 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-04 00:33:37.164508 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-04 00:33:37.164520 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-04 00:33:37.164531 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-04 00:33:37.164541 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-04 00:33:37.164552 | orchestrator | 2025-09-04 00:33:37.164563 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-09-04 00:33:37.164574 | orchestrator | Thursday 04 September 2025 00:33:19 +0000 (0:00:01.627) 0:07:19.377 **** 2025-09-04 00:33:37.164585 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:33:37.164596 | orchestrator | 2025-09-04 00:33:37.164607 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-09-04 00:33:37.164617 | orchestrator | 
Thursday 04 September 2025 00:33:20 +0000 (0:00:00.992) 0:07:20.369 **** 2025-09-04 00:33:37.164628 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:33:37.164638 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:33:37.164649 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:33:37.164659 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:33:37.164670 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:33:37.164680 | orchestrator | changed: [testbed-manager] 2025-09-04 00:33:37.164691 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:33:37.164701 | orchestrator | 2025-09-04 00:33:37.164712 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-09-04 00:33:37.164722 | orchestrator | Thursday 04 September 2025 00:33:29 +0000 (0:00:09.131) 0:07:29.501 **** 2025-09-04 00:33:37.164733 | orchestrator | ok: [testbed-manager] 2025-09-04 00:33:37.164744 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:33:37.164754 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:33:37.164765 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:33:37.164776 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:33:37.164786 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:33:37.164796 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:33:37.164807 | orchestrator | 2025-09-04 00:33:37.164818 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-09-04 00:33:37.164828 | orchestrator | Thursday 04 September 2025 00:33:31 +0000 (0:00:01.941) 0:07:31.443 **** 2025-09-04 00:33:37.164839 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:33:37.164857 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:33:37.164867 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:33:37.164878 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:33:37.164888 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:33:37.164898 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:33:37.164909 | orchestrator | 2025-09-04 00:33:37.164925 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-09-04 00:33:37.164936 | orchestrator | Thursday 04 September 2025 00:33:32 +0000 (0:00:01.281) 0:07:32.725 **** 2025-09-04 00:33:37.164947 | orchestrator | changed: [testbed-manager] 2025-09-04 00:33:37.164957 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:33:37.164968 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:33:37.164979 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:33:37.164989 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:33:37.165000 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:33:37.165010 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:33:37.165021 | orchestrator | 2025-09-04 00:33:37.165031 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-09-04 00:33:37.165063 | orchestrator | 2025-09-04 00:33:37.165074 | orchestrator | TASK [Include hardening role] ************************************************** 2025-09-04 00:33:37.165085 | orchestrator | Thursday 04 September 2025 00:33:33 +0000 (0:00:01.209) 0:07:33.934 **** 2025-09-04 00:33:37.165096 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:33:37.165106 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:33:37.165117 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:33:37.165127 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:33:37.165138 | orchestrator | skipping: 
[testbed-node-3] 2025-09-04 00:33:37.165148 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:33:37.165159 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:33:37.165170 | orchestrator | 2025-09-04 00:33:37.165180 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-09-04 00:33:37.165191 | orchestrator | 2025-09-04 00:33:37.165202 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-09-04 00:33:37.165212 | orchestrator | Thursday 04 September 2025 00:33:34 +0000 (0:00:00.495) 0:07:34.430 **** 2025-09-04 00:33:37.165223 | orchestrator | changed: [testbed-manager] 2025-09-04 00:33:37.165233 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:33:37.165244 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:33:37.165254 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:33:37.165265 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:33:37.165275 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:33:37.165286 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:33:37.165296 | orchestrator | 2025-09-04 00:33:37.165307 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-09-04 00:33:37.165318 | orchestrator | Thursday 04 September 2025 00:33:35 +0000 (0:00:01.265) 0:07:35.696 **** 2025-09-04 00:33:37.165328 | orchestrator | ok: [testbed-manager] 2025-09-04 00:33:37.165339 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:33:37.165349 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:33:37.165360 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:33:37.165370 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:33:37.165381 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:33:37.165391 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:33:37.165402 | orchestrator | 2025-09-04 00:33:37.165412 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-09-04 00:33:37.165429 | orchestrator | Thursday 04 September 2025 00:33:37 +0000 (0:00:01.669) 0:07:37.365 **** 2025-09-04 00:34:00.302960 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:34:00.303119 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:34:00.303138 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:34:00.303150 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:34:00.303161 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:34:00.303172 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:34:00.303183 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:34:00.303221 | orchestrator | 2025-09-04 00:34:00.303234 | orchestrator | TASK [Include smartd role] ***************************************************** 2025-09-04 00:34:00.303246 | orchestrator | Thursday 04 September 2025 00:33:37 +0000 (0:00:00.460) 0:07:37.826 **** 2025-09-04 00:34:00.303258 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:34:00.303270 | orchestrator | 2025-09-04 00:34:00.303281 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-09-04 00:34:00.303292 | orchestrator | Thursday 04 September 2025 00:33:38 +0000 (0:00:00.932) 0:07:38.758 **** 2025-09-04 00:34:00.303305 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:34:00.303319 | orchestrator | 2025-09-04 00:34:00.303330 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-09-04 00:34:00.303341 | orchestrator | Thursday 04 September 2025 00:33:39 +0000 (0:00:00.816) 0:07:39.575 **** 2025-09-04 00:34:00.303351 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:34:00.303362 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:34:00.303373 | orchestrator | changed: [testbed-manager] 2025-09-04 00:34:00.303384 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:34:00.303394 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:34:00.303405 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:34:00.303415 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:34:00.303426 | orchestrator | 2025-09-04 00:34:00.303437 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-09-04 00:34:00.303448 | orchestrator | Thursday 04 September 2025 00:33:47 +0000 (0:00:08.596) 0:07:48.171 **** 2025-09-04 00:34:00.303459 | orchestrator | changed: [testbed-manager] 2025-09-04 00:34:00.303470 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:34:00.303480 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:34:00.303493 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:34:00.303505 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:34:00.303518 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:34:00.303531 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:34:00.303543 | orchestrator | 2025-09-04 00:34:00.303556 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-09-04 00:34:00.303569 | orchestrator | Thursday 04 September 2025 00:33:48 +0000 (0:00:00.807) 0:07:48.979 **** 2025-09-04 00:34:00.303581 | orchestrator | changed: [testbed-manager] 2025-09-04 00:34:00.303594 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:34:00.303606 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:34:00.303618 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:34:00.303631 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:34:00.303643 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:34:00.303656 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:34:00.303668 | orchestrator | 2025-09-04 00:34:00.303681 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-09-04 00:34:00.303695 | orchestrator | Thursday 04 September 2025 00:33:50 +0000 (0:00:01.548) 0:07:50.528 **** 2025-09-04 00:34:00.303708 | orchestrator | changed: [testbed-manager] 2025-09-04 00:34:00.303721 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:34:00.303781 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:34:00.303795 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:34:00.303808 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:34:00.303821 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:34:00.303835 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:34:00.303847 | orchestrator | 2025-09-04 00:34:00.303858 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-09-04 00:34:00.303869 | orchestrator | 
Thursday 04 September 2025 00:33:52 +0000 (0:00:01.731) 0:07:52.260 **** 2025-09-04 00:34:00.303888 | orchestrator | changed: [testbed-manager] 2025-09-04 00:34:00.303899 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:34:00.303909 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:34:00.303920 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:34:00.303931 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:34:00.303941 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:34:00.303952 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:34:00.303963 | orchestrator | 2025-09-04 00:34:00.303974 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-09-04 00:34:00.303985 | orchestrator | Thursday 04 September 2025 00:33:53 +0000 (0:00:01.158) 0:07:53.418 **** 2025-09-04 00:34:00.303995 | orchestrator | changed: [testbed-manager] 2025-09-04 00:34:00.304006 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:34:00.304017 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:34:00.304048 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:34:00.304067 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:34:00.304087 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:34:00.304106 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:34:00.304126 | orchestrator | 2025-09-04 00:34:00.304146 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-09-04 00:34:00.304157 | orchestrator | 2025-09-04 00:34:00.304168 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-09-04 00:34:00.304179 | orchestrator | Thursday 04 September 2025 00:33:54 +0000 (0:00:01.296) 0:07:54.714 **** 2025-09-04 00:34:00.304190 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:34:00.304200 | orchestrator | 2025-09-04 00:34:00.304211 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-09-04 00:34:00.304239 | orchestrator | Thursday 04 September 2025 00:33:55 +0000 (0:00:00.826) 0:07:55.540 **** 2025-09-04 00:34:00.304251 | orchestrator | ok: [testbed-manager] 2025-09-04 00:34:00.304263 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:34:00.304274 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:34:00.304285 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:34:00.304296 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:34:00.304306 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:34:00.304317 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:34:00.304327 | orchestrator | 2025-09-04 00:34:00.304338 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-09-04 00:34:00.304349 | orchestrator | Thursday 04 September 2025 00:33:56 +0000 (0:00:00.786) 0:07:56.327 **** 2025-09-04 00:34:00.304360 | orchestrator | changed: [testbed-manager] 2025-09-04 00:34:00.304370 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:34:00.304381 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:34:00.304392 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:34:00.304402 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:34:00.304413 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:34:00.304423 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:34:00.304434 | orchestrator | 
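The osism.commons.state tasks above persist the bootstrap status (and, in the next block, a timestamp) as local Ansible facts on every host, so later runs can read them via ansible_local. A minimal spot-check, assuming the role writes to Ansible's default custom-facts directory (/etc/ansible/facts.d) and that you have SSH access to a node; the exact fact file names are not visible in this log:

    # list the custom fact files written by osism.commons.state (path assumed)
    $ ls /etc/ansible/facts.d/
    # print their contents; they are exposed to later plays as ansible_local
    $ sudo cat /etc/ansible/facts.d/*.fact
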
2025-09-04 00:34:00.304445 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-09-04 00:34:00.304455 | orchestrator | Thursday 04 September 2025 00:33:57 +0000 (0:00:01.301) 0:07:57.628 **** 2025-09-04 00:34:00.304466 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:34:00.304477 | orchestrator | 2025-09-04 00:34:00.304488 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-09-04 00:34:00.304498 | orchestrator | Thursday 04 September 2025 00:33:58 +0000 (0:00:00.825) 0:07:58.453 **** 2025-09-04 00:34:00.304509 | orchestrator | ok: [testbed-manager] 2025-09-04 00:34:00.304520 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:34:00.304530 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:34:00.304541 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:34:00.304552 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:34:00.304570 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:34:00.304581 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:34:00.304591 | orchestrator | 2025-09-04 00:34:00.304602 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-09-04 00:34:00.304613 | orchestrator | Thursday 04 September 2025 00:33:59 +0000 (0:00:00.803) 0:07:59.256 **** 2025-09-04 00:34:00.304624 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:34:00.304634 | orchestrator | changed: [testbed-manager] 2025-09-04 00:34:00.304645 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:34:00.304656 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:34:00.304666 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:34:00.304677 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:34:00.304687 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:34:00.304698 | orchestrator | 2025-09-04 00:34:00.304709 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:34:00.304721 | orchestrator | testbed-manager : ok=163 changed=38 unreachable=0 failed=0 skipped=41 rescued=0 ignored=0 2025-09-04 00:34:00.304737 | orchestrator | testbed-node-0 : ok=171 changed=66 unreachable=0 failed=0 skipped=37 rescued=0 ignored=0 2025-09-04 00:34:00.304748 | orchestrator | testbed-node-1 : ok=171 changed=66 unreachable=0 failed=0 skipped=36 rescued=0 ignored=0 2025-09-04 00:34:00.304759 | orchestrator | testbed-node-2 : ok=171 changed=66 unreachable=0 failed=0 skipped=36 rescued=0 ignored=0 2025-09-04 00:34:00.304770 | orchestrator | testbed-node-3 : ok=170 changed=63 unreachable=0 failed=0 skipped=36 rescued=0 ignored=0 2025-09-04 00:34:00.304781 | orchestrator | testbed-node-4 : ok=170 changed=63 unreachable=0 failed=0 skipped=36 rescued=0 ignored=0 2025-09-04 00:34:00.304791 | orchestrator | testbed-node-5 : ok=170 changed=63 unreachable=0 failed=0 skipped=36 rescued=0 ignored=0 2025-09-04 00:34:00.304802 | orchestrator | 2025-09-04 00:34:00.304813 | orchestrator | 2025-09-04 00:34:00.304824 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 00:34:00.304834 | orchestrator | Thursday 04 September 2025 00:34:00 +0000 (0:00:01.244) 0:08:00.501 **** 2025-09-04 00:34:00.304845 | orchestrator | =============================================================================== 2025-09-04 00:34:00.304856 
| orchestrator | osism.commons.packages : Install required packages --------------------- 78.61s 2025-09-04 00:34:00.304867 | orchestrator | osism.commons.packages : Download required packages -------------------- 39.91s 2025-09-04 00:34:00.304877 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.88s 2025-09-04 00:34:00.304888 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.88s 2025-09-04 00:34:00.304899 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.32s 2025-09-04 00:34:00.304910 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.04s 2025-09-04 00:34:00.304921 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.25s 2025-09-04 00:34:00.304931 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.36s 2025-09-04 00:34:00.304942 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.13s 2025-09-04 00:34:00.304953 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.70s 2025-09-04 00:34:00.304970 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.60s 2025-09-04 00:34:00.768019 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.27s 2025-09-04 00:34:00.768181 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.16s 2025-09-04 00:34:00.768223 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.07s 2025-09-04 00:34:00.768235 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.95s 2025-09-04 00:34:00.768246 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.53s 2025-09-04 00:34:00.768257 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.48s 2025-09-04 00:34:00.768268 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.79s 2025-09-04 00:34:00.768278 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.77s 2025-09-04 00:34:00.768289 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.70s 2025-09-04 00:34:01.063496 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-09-04 00:34:01.063591 | orchestrator | + osism apply network 2025-09-04 00:34:13.726004 | orchestrator | 2025-09-04 00:34:13 | INFO  | Task 1a823772-dd7d-430e-9c55-7fc83f87f896 (network) was prepared for execution. 2025-09-04 00:34:13.726169 | orchestrator | 2025-09-04 00:34:13 | INFO  | It takes a moment until task 1a823772-dd7d-430e-9c55-7fc83f87f896 (network) has been started and output is visible here. 
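The recap above shows the bootstrap play finished with no failed or unreachable hosts before `osism apply network` queues the next role. A quick sanity check of what bootstrap just set up, assuming SSH access to one of the testbed nodes; these commands are illustrative and not part of the job itself:

    # Docker engine and the compose plugin installed by osism.services.docker / osism.commons.docker_compose
    $ docker --version && docker compose version
    # osism.target was copied and enabled on all hosts
    $ systemctl is-enabled osism.target
    # chrony, lldpd and smartd were installed and (re)started
    $ chronyc tracking
    $ lldpcli show neighbors
    $ sudo smartctl --scan
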
2025-09-04 00:34:41.700786 | orchestrator | 2025-09-04 00:34:41.700881 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-09-04 00:34:41.700898 | orchestrator | 2025-09-04 00:34:41.700910 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-09-04 00:34:41.700922 | orchestrator | Thursday 04 September 2025 00:34:18 +0000 (0:00:00.274) 0:00:00.275 **** 2025-09-04 00:34:41.700933 | orchestrator | ok: [testbed-manager] 2025-09-04 00:34:41.700945 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:34:41.700956 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:34:41.700967 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:34:41.700978 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:34:41.700989 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:34:41.700999 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:34:41.701050 | orchestrator | 2025-09-04 00:34:41.701064 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-09-04 00:34:41.701075 | orchestrator | Thursday 04 September 2025 00:34:18 +0000 (0:00:00.721) 0:00:00.996 **** 2025-09-04 00:34:41.701087 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:34:41.701100 | orchestrator | 2025-09-04 00:34:41.701111 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-09-04 00:34:41.701122 | orchestrator | Thursday 04 September 2025 00:34:19 +0000 (0:00:01.242) 0:00:02.238 **** 2025-09-04 00:34:41.701133 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:34:41.701143 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:34:41.701154 | orchestrator | ok: [testbed-manager] 2025-09-04 00:34:41.701164 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:34:41.701175 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:34:41.701185 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:34:41.701196 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:34:41.701206 | orchestrator | 2025-09-04 00:34:41.701217 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-09-04 00:34:41.701228 | orchestrator | Thursday 04 September 2025 00:34:21 +0000 (0:00:01.985) 0:00:04.224 **** 2025-09-04 00:34:41.701238 | orchestrator | ok: [testbed-manager] 2025-09-04 00:34:41.701249 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:34:41.701259 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:34:41.701270 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:34:41.701280 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:34:41.701290 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:34:41.701301 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:34:41.701314 | orchestrator | 2025-09-04 00:34:41.701328 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-09-04 00:34:41.701365 | orchestrator | Thursday 04 September 2025 00:34:23 +0000 (0:00:01.628) 0:00:05.852 **** 2025-09-04 00:34:41.701379 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-09-04 00:34:41.701392 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-09-04 00:34:41.701405 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-09-04 00:34:41.701417 | orchestrator 
| ok: [testbed-node-2] => (item=/etc/netplan) 2025-09-04 00:34:41.701430 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-09-04 00:34:41.701442 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-09-04 00:34:41.701455 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-09-04 00:34:41.701467 | orchestrator | 2025-09-04 00:34:41.701480 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-09-04 00:34:41.701492 | orchestrator | Thursday 04 September 2025 00:34:24 +0000 (0:00:00.937) 0:00:06.790 **** 2025-09-04 00:34:41.701505 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-04 00:34:41.701519 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-04 00:34:41.701531 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-04 00:34:41.701544 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-04 00:34:41.701556 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-04 00:34:41.701569 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-04 00:34:41.701581 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-04 00:34:41.701593 | orchestrator | 2025-09-04 00:34:41.701605 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-09-04 00:34:41.701618 | orchestrator | Thursday 04 September 2025 00:34:27 +0000 (0:00:03.240) 0:00:10.030 **** 2025-09-04 00:34:41.701631 | orchestrator | changed: [testbed-manager] 2025-09-04 00:34:41.701644 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:34:41.701656 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:34:41.701667 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:34:41.701678 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:34:41.701688 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:34:41.701699 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:34:41.701709 | orchestrator | 2025-09-04 00:34:41.701720 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-09-04 00:34:41.701730 | orchestrator | Thursday 04 September 2025 00:34:29 +0000 (0:00:01.385) 0:00:11.416 **** 2025-09-04 00:34:41.701741 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-04 00:34:41.701751 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-04 00:34:41.701761 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-04 00:34:41.701772 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-04 00:34:41.701782 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-04 00:34:41.701793 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-04 00:34:41.701803 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-04 00:34:41.701813 | orchestrator | 2025-09-04 00:34:41.701824 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-09-04 00:34:41.701835 | orchestrator | Thursday 04 September 2025 00:34:31 +0000 (0:00:01.894) 0:00:13.310 **** 2025-09-04 00:34:41.701845 | orchestrator | ok: [testbed-manager] 2025-09-04 00:34:41.701856 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:34:41.701866 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:34:41.701877 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:34:41.701887 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:34:41.701897 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:34:41.701908 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:34:41.701918 | orchestrator | 2025-09-04 
00:34:41.701929 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-09-04 00:34:41.701955 | orchestrator | Thursday 04 September 2025 00:34:32 +0000 (0:00:01.112) 0:00:14.422 **** 2025-09-04 00:34:41.701967 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:34:41.701977 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:34:41.701988 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:34:41.702113 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:34:41.702129 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:34:41.702140 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:34:41.702151 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:34:41.702161 | orchestrator | 2025-09-04 00:34:41.702172 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-09-04 00:34:41.702183 | orchestrator | Thursday 04 September 2025 00:34:32 +0000 (0:00:00.620) 0:00:15.043 **** 2025-09-04 00:34:41.702194 | orchestrator | ok: [testbed-manager] 2025-09-04 00:34:41.702204 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:34:41.702215 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:34:41.702225 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:34:41.702236 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:34:41.702246 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:34:41.702257 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:34:41.702267 | orchestrator | 2025-09-04 00:34:41.702278 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-09-04 00:34:41.702288 | orchestrator | Thursday 04 September 2025 00:34:34 +0000 (0:00:02.209) 0:00:17.253 **** 2025-09-04 00:34:41.702299 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:34:41.702310 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:34:41.702320 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:34:41.702331 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:34:41.702341 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:34:41.702365 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:34:41.702376 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-09-04 00:34:41.702388 | orchestrator | 2025-09-04 00:34:41.702398 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-09-04 00:34:41.702409 | orchestrator | Thursday 04 September 2025 00:34:35 +0000 (0:00:00.930) 0:00:18.183 **** 2025-09-04 00:34:41.702420 | orchestrator | ok: [testbed-manager] 2025-09-04 00:34:41.702430 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:34:41.702440 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:34:41.702451 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:34:41.702461 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:34:41.702472 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:34:41.702482 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:34:41.702493 | orchestrator | 2025-09-04 00:34:41.702503 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-09-04 00:34:41.702514 | orchestrator | Thursday 04 September 2025 00:34:37 +0000 (0:00:01.631) 0:00:19.815 **** 2025-09-04 00:34:41.702525 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:34:41.702536 | orchestrator | 2025-09-04 00:34:41.702547 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-09-04 00:34:41.702558 | orchestrator | Thursday 04 September 2025 00:34:38 +0000 (0:00:01.222) 0:00:21.038 **** 2025-09-04 00:34:41.702568 | orchestrator | ok: [testbed-manager] 2025-09-04 00:34:41.702578 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:34:41.702589 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:34:41.702599 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:34:41.702610 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:34:41.702620 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:34:41.702631 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:34:41.702641 | orchestrator | 2025-09-04 00:34:41.702652 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-09-04 00:34:41.702662 | orchestrator | Thursday 04 September 2025 00:34:39 +0000 (0:00:00.946) 0:00:21.984 **** 2025-09-04 00:34:41.702673 | orchestrator | ok: [testbed-manager] 2025-09-04 00:34:41.702683 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:34:41.702694 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:34:41.702710 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:34:41.702721 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:34:41.702731 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:34:41.702742 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:34:41.702752 | orchestrator | 2025-09-04 00:34:41.702763 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-09-04 00:34:41.702774 | orchestrator | Thursday 04 September 2025 00:34:40 +0000 (0:00:00.792) 0:00:22.776 **** 2025-09-04 00:34:41.702784 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-09-04 00:34:41.702795 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-09-04 00:34:41.702805 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-09-04 00:34:41.702816 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-09-04 00:34:41.702826 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-04 00:34:41.702837 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-09-04 00:34:41.702847 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-04 00:34:41.702858 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-09-04 00:34:41.702868 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-09-04 00:34:41.702879 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-04 00:34:41.702889 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-04 00:34:41.702899 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-04 00:34:41.702910 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-04 00:34:41.702921 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-04 
00:34:41.702931 | orchestrator | 2025-09-04 00:34:41.702951 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-09-04 00:34:59.214455 | orchestrator | Thursday 04 September 2025 00:34:41 +0000 (0:00:01.156) 0:00:23.932 **** 2025-09-04 00:34:59.214600 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:34:59.214618 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:34:59.214629 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:34:59.214640 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:34:59.214651 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:34:59.214663 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:34:59.214674 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:34:59.214685 | orchestrator | 2025-09-04 00:34:59.214697 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-09-04 00:34:59.214708 | orchestrator | Thursday 04 September 2025 00:34:42 +0000 (0:00:00.663) 0:00:24.596 **** 2025-09-04 00:34:59.214722 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-2, testbed-node-1, testbed-node-0, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:34:59.214736 | orchestrator | 2025-09-04 00:34:59.214747 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-09-04 00:34:59.214758 | orchestrator | Thursday 04 September 2025 00:34:46 +0000 (0:00:04.612) 0:00:29.209 **** 2025-09-04 00:34:59.214789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-09-04 00:34:59.214802 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-09-04 00:34:59.214841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-09-04 00:34:59.214853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-09-04 00:34:59.214872 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-09-04 00:34:59.214883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-09-04 00:34:59.214896 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 
'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-09-04 00:34:59.214907 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-09-04 00:34:59.214918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-09-04 00:34:59.214930 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-09-04 00:34:59.214941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-09-04 00:34:59.214971 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-09-04 00:34:59.214983 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-09-04 00:34:59.214994 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-09-04 00:34:59.215031 | orchestrator | 2025-09-04 00:34:59.215043 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-09-04 00:34:59.215053 | orchestrator | Thursday 04 September 2025 00:34:53 +0000 (0:00:06.153) 0:00:35.362 **** 2025-09-04 00:34:59.215065 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-09-04 00:34:59.215085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-09-04 00:34:59.215096 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-09-04 00:34:59.215107 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-09-04 00:34:59.215118 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-09-04 00:34:59.215129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-09-04 00:34:59.215141 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-09-04 00:34:59.215152 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-09-04 00:34:59.215163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-09-04 00:34:59.215174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-09-04 00:34:59.215186 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-09-04 00:34:59.215197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-09-04 00:34:59.215221 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-09-04 00:35:05.508054 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-09-04 00:35:05.508152 | orchestrator | 2025-09-04 00:35:05.508187 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-09-04 00:35:05.508201 | orchestrator | Thursday 04 September 2025 00:34:59 +0000 (0:00:06.087) 0:00:41.450 **** 
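Note: the two "Create systemd networkd netdev/network files" tasks above template one .netdev and one .network unit per VXLAN (VNI 42 stays address-less on the nodes, VNI 23 carries an address from 192.168.128.0/20). A minimal hand-written equivalent for one tunnel is sketched below; the file names, the option set and the handling of the 'dests' peer list are assumptions, not the role's actual templates.

# Sketch only: systemd-networkd units roughly matching the logged values for
# testbed-node-0 (vxlan0: VNI 42, local 192.168.16.10, MTU 1350). The unicast
# peers from the 'dests' list would additionally need forwarding entries,
# which are omitted here.
sudo tee /etc/systemd/network/30-vxlan0.netdev > /dev/null <<'EOF'
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.10
EOF

# Matching .network unit; for VNI 23 this is where an address such as
# 192.168.128.10/20 would be configured.
sudo tee /etc/systemd/network/30-vxlan0.network > /dev/null <<'EOF'
[Match]
Name=vxlan0

[Network]
LinkLocalAddressing=no
EOF

sudo networkctl reload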
2025-09-04 00:35:05.508231 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:35:05.508243 | orchestrator | 2025-09-04 00:35:05.508254 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-09-04 00:35:05.508265 | orchestrator | Thursday 04 September 2025 00:35:00 +0000 (0:00:01.298) 0:00:42.749 **** 2025-09-04 00:35:05.508280 | orchestrator | ok: [testbed-manager] 2025-09-04 00:35:05.508292 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:35:05.508303 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:35:05.508313 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:35:05.508324 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:35:05.508335 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:35:05.508345 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:35:05.508356 | orchestrator | 2025-09-04 00:35:05.508367 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-09-04 00:35:05.508378 | orchestrator | Thursday 04 September 2025 00:35:01 +0000 (0:00:01.151) 0:00:43.901 **** 2025-09-04 00:35:05.508389 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-04 00:35:05.508400 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-04 00:35:05.508411 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-04 00:35:05.508422 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-04 00:35:05.508432 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:35:05.508444 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-04 00:35:05.508455 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-04 00:35:05.508465 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-04 00:35:05.508476 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-04 00:35:05.508487 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-04 00:35:05.508497 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-04 00:35:05.508508 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-04 00:35:05.508519 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-04 00:35:05.508529 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:35:05.508540 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-04 00:35:05.508551 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-04 00:35:05.508562 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-04 00:35:05.508572 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:35:05.508583 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-04 00:35:05.508594 | orchestrator | skipping: [testbed-node-3] => 
(item=/etc/systemd/network/30-vxlan1.network)  2025-09-04 00:35:05.508604 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-04 00:35:05.508615 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-04 00:35:05.508626 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-04 00:35:05.508636 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:35:05.508647 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-04 00:35:05.508658 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-04 00:35:05.508675 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-04 00:35:05.508686 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-04 00:35:05.508697 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:35:05.508708 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:35:05.508719 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-04 00:35:05.508729 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-04 00:35:05.508740 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-04 00:35:05.508751 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-04 00:35:05.508762 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:35:05.508772 | orchestrator | 2025-09-04 00:35:05.508783 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-09-04 00:35:05.508810 | orchestrator | Thursday 04 September 2025 00:35:03 +0000 (0:00:02.147) 0:00:46.049 **** 2025-09-04 00:35:05.508822 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:35:05.508833 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:35:05.508844 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:35:05.508854 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:35:05.508865 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:35:05.508876 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:35:05.508886 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:35:05.508897 | orchestrator | 2025-09-04 00:35:05.508908 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-09-04 00:35:05.508918 | orchestrator | Thursday 04 September 2025 00:35:04 +0000 (0:00:00.636) 0:00:46.685 **** 2025-09-04 00:35:05.508929 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:35:05.508940 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:35:05.508951 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:35:05.508961 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:35:05.508972 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:35:05.508982 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:35:05.508993 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:35:05.509024 | orchestrator | 2025-09-04 00:35:05.509035 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:35:05.509051 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-04 00:35:05.509063 
| orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-04 00:35:05.509074 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-04 00:35:05.509085 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-04 00:35:05.509096 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-04 00:35:05.509107 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-04 00:35:05.509118 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-04 00:35:05.509128 | orchestrator | 2025-09-04 00:35:05.509139 | orchestrator | 2025-09-04 00:35:05.509151 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 00:35:05.509162 | orchestrator | Thursday 04 September 2025 00:35:05 +0000 (0:00:00.690) 0:00:47.376 **** 2025-09-04 00:35:05.509179 | orchestrator | =============================================================================== 2025-09-04 00:35:05.509190 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.15s 2025-09-04 00:35:05.509201 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.09s 2025-09-04 00:35:05.509212 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.61s 2025-09-04 00:35:05.509223 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.24s 2025-09-04 00:35:05.509233 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.21s 2025-09-04 00:35:05.509244 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.15s 2025-09-04 00:35:05.509255 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.99s 2025-09-04 00:35:05.509266 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.89s 2025-09-04 00:35:05.509277 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.63s 2025-09-04 00:35:05.509287 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.63s 2025-09-04 00:35:05.509298 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.39s 2025-09-04 00:35:05.509309 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.30s 2025-09-04 00:35:05.509320 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.24s 2025-09-04 00:35:05.509331 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.22s 2025-09-04 00:35:05.509342 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.16s 2025-09-04 00:35:05.509352 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.15s 2025-09-04 00:35:05.509363 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.11s 2025-09-04 00:35:05.509374 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.95s 2025-09-04 00:35:05.509385 | orchestrator | osism.commons.network : Create required directories 
--------------------- 0.94s 2025-09-04 00:35:05.509395 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.93s 2025-09-04 00:35:05.802192 | orchestrator | + osism apply wireguard 2025-09-04 00:35:17.892275 | orchestrator | 2025-09-04 00:35:17 | INFO  | Task b011ed90-2e39-4e81-8b45-ddcc18b8b29f (wireguard) was prepared for execution. 2025-09-04 00:35:17.892397 | orchestrator | 2025-09-04 00:35:17 | INFO  | It takes a moment until task b011ed90-2e39-4e81-8b45-ddcc18b8b29f (wireguard) has been started and output is visible here. 2025-09-04 00:35:37.639811 | orchestrator | 2025-09-04 00:35:37.639920 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-09-04 00:35:37.639938 | orchestrator | 2025-09-04 00:35:37.639951 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-09-04 00:35:37.639963 | orchestrator | Thursday 04 September 2025 00:35:21 +0000 (0:00:00.228) 0:00:00.228 **** 2025-09-04 00:35:37.639975 | orchestrator | ok: [testbed-manager] 2025-09-04 00:35:37.640014 | orchestrator | 2025-09-04 00:35:37.640026 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-09-04 00:35:37.640037 | orchestrator | Thursday 04 September 2025 00:35:23 +0000 (0:00:01.502) 0:00:01.730 **** 2025-09-04 00:35:37.640049 | orchestrator | changed: [testbed-manager] 2025-09-04 00:35:37.640061 | orchestrator | 2025-09-04 00:35:37.640073 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-09-04 00:35:37.640084 | orchestrator | Thursday 04 September 2025 00:35:30 +0000 (0:00:06.510) 0:00:08.241 **** 2025-09-04 00:35:37.640095 | orchestrator | changed: [testbed-manager] 2025-09-04 00:35:37.640107 | orchestrator | 2025-09-04 00:35:37.640118 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-09-04 00:35:37.640144 | orchestrator | Thursday 04 September 2025 00:35:30 +0000 (0:00:00.570) 0:00:08.811 **** 2025-09-04 00:35:37.640155 | orchestrator | changed: [testbed-manager] 2025-09-04 00:35:37.640189 | orchestrator | 2025-09-04 00:35:37.640202 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-09-04 00:35:37.640213 | orchestrator | Thursday 04 September 2025 00:35:31 +0000 (0:00:00.443) 0:00:09.254 **** 2025-09-04 00:35:37.640224 | orchestrator | ok: [testbed-manager] 2025-09-04 00:35:37.640235 | orchestrator | 2025-09-04 00:35:37.640246 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-09-04 00:35:37.640258 | orchestrator | Thursday 04 September 2025 00:35:31 +0000 (0:00:00.533) 0:00:09.787 **** 2025-09-04 00:35:37.640269 | orchestrator | ok: [testbed-manager] 2025-09-04 00:35:37.640281 | orchestrator | 2025-09-04 00:35:37.640292 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-09-04 00:35:37.640304 | orchestrator | Thursday 04 September 2025 00:35:32 +0000 (0:00:00.533) 0:00:10.320 **** 2025-09-04 00:35:37.640315 | orchestrator | ok: [testbed-manager] 2025-09-04 00:35:37.640326 | orchestrator | 2025-09-04 00:35:37.640338 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-09-04 00:35:37.640350 | orchestrator | Thursday 04 September 2025 00:35:32 +0000 (0:00:00.416) 0:00:10.737 **** 2025-09-04 00:35:37.640363 | orchestrator | 
changed: [testbed-manager] 2025-09-04 00:35:37.640376 | orchestrator | 2025-09-04 00:35:37.640389 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-09-04 00:35:37.640402 | orchestrator | Thursday 04 September 2025 00:35:33 +0000 (0:00:01.139) 0:00:11.877 **** 2025-09-04 00:35:37.640416 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-04 00:35:37.640430 | orchestrator | changed: [testbed-manager] 2025-09-04 00:35:37.640443 | orchestrator | 2025-09-04 00:35:37.640457 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-09-04 00:35:37.640471 | orchestrator | Thursday 04 September 2025 00:35:34 +0000 (0:00:00.925) 0:00:12.803 **** 2025-09-04 00:35:37.640484 | orchestrator | changed: [testbed-manager] 2025-09-04 00:35:37.640497 | orchestrator | 2025-09-04 00:35:37.640510 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-09-04 00:35:37.640524 | orchestrator | Thursday 04 September 2025 00:35:36 +0000 (0:00:01.785) 0:00:14.588 **** 2025-09-04 00:35:37.640537 | orchestrator | changed: [testbed-manager] 2025-09-04 00:35:37.640548 | orchestrator | 2025-09-04 00:35:37.640560 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:35:37.640574 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:35:37.640587 | orchestrator | 2025-09-04 00:35:37.640599 | orchestrator | 2025-09-04 00:35:37.640613 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 00:35:37.640626 | orchestrator | Thursday 04 September 2025 00:35:37 +0000 (0:00:00.940) 0:00:15.528 **** 2025-09-04 00:35:37.640639 | orchestrator | =============================================================================== 2025-09-04 00:35:37.640651 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.51s 2025-09-04 00:35:37.640664 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.79s 2025-09-04 00:35:37.640677 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.50s 2025-09-04 00:35:37.640690 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.14s 2025-09-04 00:35:37.640703 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.94s 2025-09-04 00:35:37.640713 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.93s 2025-09-04 00:35:37.640724 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.57s 2025-09-04 00:35:37.640735 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.53s 2025-09-04 00:35:37.640745 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.53s 2025-09-04 00:35:37.640756 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.44s 2025-09-04 00:35:37.640774 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.42s 2025-09-04 00:35:37.926832 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-09-04 00:35:37.969896 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-09-04 00:35:37.969939 | 
orchestrator | Dload Upload Total Spent Left Speed 2025-09-04 00:35:38.053299 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 179 0 --:--:-- --:--:-- --:--:-- 180 2025-09-04 00:35:38.067281 | orchestrator | + osism apply --environment custom workarounds 2025-09-04 00:35:39.945658 | orchestrator | 2025-09-04 00:35:39 | INFO  | Trying to run play workarounds in environment custom 2025-09-04 00:35:50.167472 | orchestrator | 2025-09-04 00:35:50 | INFO  | Task 1d827555-6c91-4725-85fc-303d692f79b9 (workarounds) was prepared for execution. 2025-09-04 00:35:50.167570 | orchestrator | 2025-09-04 00:35:50 | INFO  | It takes a moment until task 1d827555-6c91-4725-85fc-303d692f79b9 (workarounds) has been started and output is visible here. 2025-09-04 00:36:14.916521 | orchestrator | 2025-09-04 00:36:14.916645 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-04 00:36:14.916663 | orchestrator | 2025-09-04 00:36:14.916675 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-09-04 00:36:14.916686 | orchestrator | Thursday 04 September 2025 00:35:54 +0000 (0:00:00.148) 0:00:00.148 **** 2025-09-04 00:36:14.916698 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-09-04 00:36:14.916724 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-09-04 00:36:14.916736 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-09-04 00:36:14.916747 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-09-04 00:36:14.916757 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-09-04 00:36:14.916768 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-09-04 00:36:14.916779 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-09-04 00:36:14.916789 | orchestrator | 2025-09-04 00:36:14.916800 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-09-04 00:36:14.916811 | orchestrator | 2025-09-04 00:36:14.916821 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-09-04 00:36:14.916832 | orchestrator | Thursday 04 September 2025 00:35:54 +0000 (0:00:00.773) 0:00:00.921 **** 2025-09-04 00:36:14.916843 | orchestrator | ok: [testbed-manager] 2025-09-04 00:36:14.916856 | orchestrator | 2025-09-04 00:36:14.916867 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-09-04 00:36:14.916878 | orchestrator | 2025-09-04 00:36:14.916889 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-09-04 00:36:14.916899 | orchestrator | Thursday 04 September 2025 00:35:57 +0000 (0:00:02.454) 0:00:03.375 **** 2025-09-04 00:36:14.916910 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:36:14.916921 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:36:14.916932 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:36:14.916942 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:36:14.916953 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:36:14.916964 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:36:14.917025 | orchestrator | 2025-09-04 00:36:14.917037 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-09-04 00:36:14.917048 | orchestrator | 
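Note: the play starting here copies the testbed CA to the non-manager nodes and refreshes the system trust store. On Debian/Ubuntu the manual equivalent of the two tasks that follow is roughly the commands below (the destination file name is an assumption); the skipped update-ca-trust task is the RedHat-family counterpart.

# Manual equivalent of "Copy custom CA certificates" + "Run update-ca-certificates"
# on a Debian/Ubuntu node; the destination file name is an assumption.
sudo cp /opt/configuration/environments/kolla/certificates/ca/testbed.crt \
  /usr/local/share/ca-certificates/testbed.crt
sudo update-ca-certificates
# RedHat-family hosts would instead place the file under
# /etc/pki/ca-trust/source/anchors/ and run: sudo update-ca-trust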
2025-09-04 00:36:14.917061 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-09-04 00:36:14.917074 | orchestrator | Thursday 04 September 2025 00:35:59 +0000 (0:00:01.831) 0:00:05.207 **** 2025-09-04 00:36:14.917088 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-04 00:36:14.917125 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-04 00:36:14.917138 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-04 00:36:14.917150 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-04 00:36:14.917163 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-04 00:36:14.917176 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-04 00:36:14.917188 | orchestrator | 2025-09-04 00:36:14.917201 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-09-04 00:36:14.917213 | orchestrator | Thursday 04 September 2025 00:36:00 +0000 (0:00:01.438) 0:00:06.646 **** 2025-09-04 00:36:14.917226 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:36:14.917239 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:36:14.917250 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:36:14.917263 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:36:14.917275 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:36:14.917288 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:36:14.917300 | orchestrator | 2025-09-04 00:36:14.917312 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-09-04 00:36:14.917326 | orchestrator | Thursday 04 September 2025 00:36:04 +0000 (0:00:03.786) 0:00:10.432 **** 2025-09-04 00:36:14.917338 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:36:14.917351 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:36:14.917363 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:36:14.917376 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:36:14.917388 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:36:14.917401 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:36:14.917414 | orchestrator | 2025-09-04 00:36:14.917425 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-09-04 00:36:14.917436 | orchestrator | 2025-09-04 00:36:14.917446 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-09-04 00:36:14.917457 | orchestrator | Thursday 04 September 2025 00:36:05 +0000 (0:00:00.688) 0:00:11.121 **** 2025-09-04 00:36:14.917468 | orchestrator | changed: [testbed-manager] 2025-09-04 00:36:14.917479 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:36:14.917489 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:36:14.917500 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:36:14.917511 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:36:14.917521 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:36:14.917532 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:36:14.917543 | orchestrator | 2025-09-04 00:36:14.917553 | 
orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-09-04 00:36:14.917564 | orchestrator | Thursday 04 September 2025 00:36:06 +0000 (0:00:01.635) 0:00:12.756 **** 2025-09-04 00:36:14.917575 | orchestrator | changed: [testbed-manager] 2025-09-04 00:36:14.917585 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:36:14.917596 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:36:14.917606 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:36:14.917617 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:36:14.917628 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:36:14.917656 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:36:14.917667 | orchestrator | 2025-09-04 00:36:14.917678 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-09-04 00:36:14.917689 | orchestrator | Thursday 04 September 2025 00:36:08 +0000 (0:00:01.626) 0:00:14.382 **** 2025-09-04 00:36:14.917700 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:36:14.917711 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:36:14.917721 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:36:14.917732 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:36:14.917742 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:36:14.917761 | orchestrator | ok: [testbed-manager] 2025-09-04 00:36:14.917778 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:36:14.917789 | orchestrator | 2025-09-04 00:36:14.917800 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-09-04 00:36:14.917810 | orchestrator | Thursday 04 September 2025 00:36:09 +0000 (0:00:01.521) 0:00:15.904 **** 2025-09-04 00:36:14.917821 | orchestrator | changed: [testbed-manager] 2025-09-04 00:36:14.917832 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:36:14.917842 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:36:14.917853 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:36:14.917864 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:36:14.917874 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:36:14.917885 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:36:14.917895 | orchestrator | 2025-09-04 00:36:14.917906 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-09-04 00:36:14.917917 | orchestrator | Thursday 04 September 2025 00:36:11 +0000 (0:00:01.722) 0:00:17.626 **** 2025-09-04 00:36:14.917927 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:36:14.917938 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:36:14.917948 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:36:14.917959 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:36:14.918006 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:36:14.918063 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:36:14.918077 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:36:14.918088 | orchestrator | 2025-09-04 00:36:14.918099 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-09-04 00:36:14.918109 | orchestrator | 2025-09-04 00:36:14.918120 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-09-04 00:36:14.918131 | orchestrator | Thursday 04 September 2025 00:36:12 +0000 (0:00:00.631) 0:00:18.258 **** 2025-09-04 00:36:14.918142 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:36:14.918153 
| orchestrator | ok: [testbed-node-5] 2025-09-04 00:36:14.918163 | orchestrator | ok: [testbed-manager] 2025-09-04 00:36:14.918174 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:36:14.918184 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:36:14.918195 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:36:14.918206 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:36:14.918216 | orchestrator | 2025-09-04 00:36:14.918227 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:36:14.918239 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-04 00:36:14.918252 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-04 00:36:14.918262 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-04 00:36:14.918273 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-04 00:36:14.918284 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-04 00:36:14.918295 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-04 00:36:14.918306 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-04 00:36:14.918316 | orchestrator | 2025-09-04 00:36:14.918327 | orchestrator | 2025-09-04 00:36:14.918338 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 00:36:14.918349 | orchestrator | Thursday 04 September 2025 00:36:14 +0000 (0:00:02.618) 0:00:20.876 **** 2025-09-04 00:36:14.918368 | orchestrator | =============================================================================== 2025-09-04 00:36:14.918379 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.79s 2025-09-04 00:36:14.918390 | orchestrator | Install python3-docker -------------------------------------------------- 2.62s 2025-09-04 00:36:14.918400 | orchestrator | Apply netplan configuration --------------------------------------------- 2.45s 2025-09-04 00:36:14.918411 | orchestrator | Apply netplan configuration --------------------------------------------- 1.83s 2025-09-04 00:36:14.918421 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.72s 2025-09-04 00:36:14.918432 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.64s 2025-09-04 00:36:14.918442 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.63s 2025-09-04 00:36:14.918453 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.52s 2025-09-04 00:36:14.918464 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.44s 2025-09-04 00:36:14.918475 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.77s 2025-09-04 00:36:14.918485 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.69s 2025-09-04 00:36:14.918504 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.63s 2025-09-04 00:36:15.577354 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-09-04 00:36:27.620552 | orchestrator | 
2025-09-04 00:36:27 | INFO  | Task ee7337e0-17ea-49c5-8a1e-a081728c108f (reboot) was prepared for execution. 2025-09-04 00:36:27.620684 | orchestrator | 2025-09-04 00:36:27 | INFO  | It takes a moment until task ee7337e0-17ea-49c5-8a1e-a081728c108f (reboot) has been started and output is visible here. 2025-09-04 00:36:37.464946 | orchestrator | 2025-09-04 00:36:37.465126 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-04 00:36:37.465154 | orchestrator | 2025-09-04 00:36:37.465171 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-04 00:36:37.465188 | orchestrator | Thursday 04 September 2025 00:36:31 +0000 (0:00:00.212) 0:00:00.212 **** 2025-09-04 00:36:37.465204 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:36:37.465222 | orchestrator | 2025-09-04 00:36:37.465239 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-04 00:36:37.465255 | orchestrator | Thursday 04 September 2025 00:36:31 +0000 (0:00:00.097) 0:00:00.309 **** 2025-09-04 00:36:37.465271 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:36:37.465287 | orchestrator | 2025-09-04 00:36:37.465302 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-04 00:36:37.465319 | orchestrator | Thursday 04 September 2025 00:36:32 +0000 (0:00:00.942) 0:00:01.251 **** 2025-09-04 00:36:37.465335 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:36:37.465352 | orchestrator | 2025-09-04 00:36:37.465371 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-04 00:36:37.465389 | orchestrator | 2025-09-04 00:36:37.465406 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-04 00:36:37.465425 | orchestrator | Thursday 04 September 2025 00:36:32 +0000 (0:00:00.115) 0:00:01.367 **** 2025-09-04 00:36:37.465442 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:36:37.465458 | orchestrator | 2025-09-04 00:36:37.465478 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-04 00:36:37.465495 | orchestrator | Thursday 04 September 2025 00:36:32 +0000 (0:00:00.104) 0:00:01.471 **** 2025-09-04 00:36:37.465512 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:36:37.465531 | orchestrator | 2025-09-04 00:36:37.465548 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-04 00:36:37.465564 | orchestrator | Thursday 04 September 2025 00:36:33 +0000 (0:00:00.669) 0:00:02.140 **** 2025-09-04 00:36:37.465580 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:36:37.465632 | orchestrator | 2025-09-04 00:36:37.465651 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-04 00:36:37.465667 | orchestrator | 2025-09-04 00:36:37.465686 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-04 00:36:37.465704 | orchestrator | Thursday 04 September 2025 00:36:33 +0000 (0:00:00.106) 0:00:02.246 **** 2025-09-04 00:36:37.465722 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:36:37.465741 | orchestrator | 2025-09-04 00:36:37.465758 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-04 00:36:37.465777 | orchestrator | Thursday 04 September 2025 
00:36:33 +0000 (0:00:00.195) 0:00:02.442 **** 2025-09-04 00:36:37.465795 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:36:37.465813 | orchestrator | 2025-09-04 00:36:37.465831 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-04 00:36:37.465850 | orchestrator | Thursday 04 September 2025 00:36:34 +0000 (0:00:00.659) 0:00:03.102 **** 2025-09-04 00:36:37.465867 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:36:37.465882 | orchestrator | 2025-09-04 00:36:37.465899 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-04 00:36:37.465917 | orchestrator | 2025-09-04 00:36:37.465933 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-04 00:36:37.465951 | orchestrator | Thursday 04 September 2025 00:36:34 +0000 (0:00:00.115) 0:00:03.217 **** 2025-09-04 00:36:37.466011 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:36:37.466097 | orchestrator | 2025-09-04 00:36:37.466115 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-04 00:36:37.466130 | orchestrator | Thursday 04 September 2025 00:36:34 +0000 (0:00:00.098) 0:00:03.315 **** 2025-09-04 00:36:37.466146 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:36:37.466162 | orchestrator | 2025-09-04 00:36:37.466179 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-04 00:36:37.466194 | orchestrator | Thursday 04 September 2025 00:36:35 +0000 (0:00:00.688) 0:00:04.004 **** 2025-09-04 00:36:37.466210 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:36:37.466225 | orchestrator | 2025-09-04 00:36:37.466241 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-04 00:36:37.466257 | orchestrator | 2025-09-04 00:36:37.466274 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-04 00:36:37.466290 | orchestrator | Thursday 04 September 2025 00:36:35 +0000 (0:00:00.111) 0:00:04.116 **** 2025-09-04 00:36:37.466305 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:36:37.466321 | orchestrator | 2025-09-04 00:36:37.466338 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-04 00:36:37.466354 | orchestrator | Thursday 04 September 2025 00:36:35 +0000 (0:00:00.104) 0:00:04.220 **** 2025-09-04 00:36:37.466369 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:36:37.466385 | orchestrator | 2025-09-04 00:36:37.466402 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-04 00:36:37.466419 | orchestrator | Thursday 04 September 2025 00:36:36 +0000 (0:00:00.670) 0:00:04.891 **** 2025-09-04 00:36:37.466435 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:36:37.466452 | orchestrator | 2025-09-04 00:36:37.466468 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-04 00:36:37.466484 | orchestrator | 2025-09-04 00:36:37.466503 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-04 00:36:37.466526 | orchestrator | Thursday 04 September 2025 00:36:36 +0000 (0:00:00.118) 0:00:05.009 **** 2025-09-04 00:36:37.466542 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:36:37.466559 | orchestrator | 2025-09-04 00:36:37.466575 | 
orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-04 00:36:37.466590 | orchestrator | Thursday 04 September 2025 00:36:36 +0000 (0:00:00.094) 0:00:05.103 **** 2025-09-04 00:36:37.466615 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:36:37.466633 | orchestrator | 2025-09-04 00:36:37.466663 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-04 00:36:37.466680 | orchestrator | Thursday 04 September 2025 00:36:37 +0000 (0:00:00.642) 0:00:05.745 **** 2025-09-04 00:36:37.466719 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:36:37.466735 | orchestrator | 2025-09-04 00:36:37.466752 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:36:37.466770 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-04 00:36:37.466788 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-04 00:36:37.466804 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-04 00:36:37.466820 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-04 00:36:37.466836 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-04 00:36:37.466853 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-04 00:36:37.466869 | orchestrator | 2025-09-04 00:36:37.466885 | orchestrator | 2025-09-04 00:36:37.466901 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 00:36:37.466916 | orchestrator | Thursday 04 September 2025 00:36:37 +0000 (0:00:00.035) 0:00:05.781 **** 2025-09-04 00:36:37.466932 | orchestrator | =============================================================================== 2025-09-04 00:36:37.466954 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.27s 2025-09-04 00:36:37.466997 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.69s 2025-09-04 00:36:37.467013 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.60s 2025-09-04 00:36:37.750889 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-09-04 00:36:49.831489 | orchestrator | 2025-09-04 00:36:49 | INFO  | Task 4b991009-731d-4635-954a-a0c5cdb71606 (wait-for-connection) was prepared for execution. 2025-09-04 00:36:49.831601 | orchestrator | 2025-09-04 00:36:49 | INFO  | It takes a moment until task 4b991009-731d-4635-954a-a0c5cdb71606 (wait-for-connection) has been started and output is visible here. 
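Note: the reboot play above only fires the reboots ("do not wait for the reboot to complete"), so the wait-for-connection play announced here is what actually blocks until all nodes are reachable again. A rough shell equivalent of that barrier is sketched below; hostnames, timeout and SSH options are assumptions.

# Rough equivalent of the wait-for-connection barrier: poll each rebooted node
# over SSH until it accepts connections again or a timeout is hit.
wait_for_ssh() {
  local host=$1 timeout=${2:-600} start
  start=$(date +%s)
  until ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true 2>/dev/null; do
    if (( $(date +%s) - start >= timeout )); then
      echo "timed out waiting for $host" >&2
      return 1
    fi
    sleep 5
  done
}

for node in testbed-node-{0..5}; do
  wait_for_ssh "$node"
done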
2025-09-04 00:37:05.905040 | orchestrator | 2025-09-04 00:37:05.905147 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-09-04 00:37:05.905165 | orchestrator | 2025-09-04 00:37:05.905178 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-09-04 00:37:05.905190 | orchestrator | Thursday 04 September 2025 00:36:53 +0000 (0:00:00.234) 0:00:00.234 **** 2025-09-04 00:37:05.905201 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:37:05.905213 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:37:05.905223 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:37:05.905234 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:37:05.905244 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:37:05.905255 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:37:05.905265 | orchestrator | 2025-09-04 00:37:05.905277 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:37:05.905288 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:37:05.905300 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:37:05.905311 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:37:05.905346 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:37:05.905358 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:37:05.905368 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:37:05.905379 | orchestrator | 2025-09-04 00:37:05.905390 | orchestrator | 2025-09-04 00:37:05.905400 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 00:37:05.905411 | orchestrator | Thursday 04 September 2025 00:37:05 +0000 (0:00:11.649) 0:00:11.883 **** 2025-09-04 00:37:05.905422 | orchestrator | =============================================================================== 2025-09-04 00:37:05.905432 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.65s 2025-09-04 00:37:06.196712 | orchestrator | + osism apply hddtemp 2025-09-04 00:37:18.275667 | orchestrator | 2025-09-04 00:37:18 | INFO  | Task d5cf9c31-b54b-49d3-b1fe-12e5a77da619 (hddtemp) was prepared for execution. 2025-09-04 00:37:18.275756 | orchestrator | 2025-09-04 00:37:18 | INFO  | It takes a moment until task d5cf9c31-b54b-49d3-b1fe-12e5a77da619 (hddtemp) has been started and output is visible here. 
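Note: the hddtemp play that follows removes the legacy hddtemp package and relies on the in-kernel drivetemp hwmon driver plus lm-sensors instead. The "Enable/Load Kernel Module drivetemp" tasks boil down to roughly the manual steps below (the modules-load.d file name is an assumption).

# Persistently enable and immediately load the drivetemp module, then read the
# drive temperatures through lm-sensors.
echo drivetemp | sudo tee /etc/modules-load.d/drivetemp.conf > /dev/null
sudo modprobe drivetemp
sensors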
2025-09-04 00:37:45.863603 | orchestrator | 2025-09-04 00:37:45.863723 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-09-04 00:37:45.863759 | orchestrator | 2025-09-04 00:37:45.863773 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-09-04 00:37:45.863785 | orchestrator | Thursday 04 September 2025 00:37:22 +0000 (0:00:00.274) 0:00:00.274 **** 2025-09-04 00:37:45.863796 | orchestrator | ok: [testbed-manager] 2025-09-04 00:37:45.863808 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:37:45.863819 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:37:45.863829 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:37:45.863840 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:37:45.863850 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:37:45.863860 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:37:45.863871 | orchestrator | 2025-09-04 00:37:45.863882 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-09-04 00:37:45.863892 | orchestrator | Thursday 04 September 2025 00:37:23 +0000 (0:00:00.679) 0:00:00.954 **** 2025-09-04 00:37:45.863905 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:37:45.863918 | orchestrator | 2025-09-04 00:37:45.863929 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-09-04 00:37:45.863986 | orchestrator | Thursday 04 September 2025 00:37:24 +0000 (0:00:01.194) 0:00:02.149 **** 2025-09-04 00:37:45.863998 | orchestrator | ok: [testbed-manager] 2025-09-04 00:37:45.864009 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:37:45.864019 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:37:45.864030 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:37:45.864041 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:37:45.864052 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:37:45.864062 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:37:45.864073 | orchestrator | 2025-09-04 00:37:45.864083 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-09-04 00:37:45.864094 | orchestrator | Thursday 04 September 2025 00:37:26 +0000 (0:00:01.939) 0:00:04.089 **** 2025-09-04 00:37:45.864105 | orchestrator | changed: [testbed-manager] 2025-09-04 00:37:45.864116 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:37:45.864127 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:37:45.864138 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:37:45.864149 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:37:45.864179 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:37:45.864191 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:37:45.864201 | orchestrator | 2025-09-04 00:37:45.864212 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-09-04 00:37:45.864223 | orchestrator | Thursday 04 September 2025 00:37:27 +0000 (0:00:01.120) 0:00:05.209 **** 2025-09-04 00:37:45.864233 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:37:45.864244 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:37:45.864254 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:37:45.864265 | orchestrator | ok: [testbed-node-3] 2025-09-04 
00:37:45.864275 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:37:45.864286 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:37:45.864296 | orchestrator | ok: [testbed-manager] 2025-09-04 00:37:45.864307 | orchestrator | 2025-09-04 00:37:45.864317 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-09-04 00:37:45.864328 | orchestrator | Thursday 04 September 2025 00:37:28 +0000 (0:00:01.143) 0:00:06.352 **** 2025-09-04 00:37:45.864339 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:37:45.864350 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:37:45.864361 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:37:45.864371 | orchestrator | changed: [testbed-manager] 2025-09-04 00:37:45.864382 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:37:45.864392 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:37:45.864403 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:37:45.864413 | orchestrator | 2025-09-04 00:37:45.864424 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-09-04 00:37:45.864435 | orchestrator | Thursday 04 September 2025 00:37:29 +0000 (0:00:00.787) 0:00:07.139 **** 2025-09-04 00:37:45.864445 | orchestrator | changed: [testbed-manager] 2025-09-04 00:37:45.864456 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:37:45.864467 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:37:45.864477 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:37:45.864487 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:37:45.864498 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:37:45.864508 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:37:45.864518 | orchestrator | 2025-09-04 00:37:45.864529 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-09-04 00:37:45.864540 | orchestrator | Thursday 04 September 2025 00:37:42 +0000 (0:00:12.918) 0:00:20.057 **** 2025-09-04 00:37:45.864551 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:37:45.864563 | orchestrator | 2025-09-04 00:37:45.864573 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-09-04 00:37:45.864584 | orchestrator | Thursday 04 September 2025 00:37:43 +0000 (0:00:01.402) 0:00:21.460 **** 2025-09-04 00:37:45.864595 | orchestrator | changed: [testbed-manager] 2025-09-04 00:37:45.864605 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:37:45.864616 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:37:45.864626 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:37:45.864637 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:37:45.864647 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:37:45.864657 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:37:45.864668 | orchestrator | 2025-09-04 00:37:45.864678 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:37:45.864690 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:37:45.864718 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-04 00:37:45.864736 | 
orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-04 00:37:45.864755 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-04 00:37:45.864766 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-04 00:37:45.864776 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-04 00:37:45.864787 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-04 00:37:45.864797 | orchestrator | 2025-09-04 00:37:45.864808 | orchestrator | 2025-09-04 00:37:45.864819 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 00:37:45.864829 | orchestrator | Thursday 04 September 2025 00:37:45 +0000 (0:00:01.843) 0:00:23.303 **** 2025-09-04 00:37:45.864840 | orchestrator | =============================================================================== 2025-09-04 00:37:45.864851 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.92s 2025-09-04 00:37:45.864861 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.94s 2025-09-04 00:37:45.864872 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.84s 2025-09-04 00:37:45.864882 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.40s 2025-09-04 00:37:45.864893 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.19s 2025-09-04 00:37:45.864903 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.14s 2025-09-04 00:37:45.864914 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.12s 2025-09-04 00:37:45.864924 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.79s 2025-09-04 00:37:45.864979 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.68s 2025-09-04 00:37:46.157871 | orchestrator | ++ semver latest 7.1.1 2025-09-04 00:37:46.209975 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-04 00:37:46.210090 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-04 00:37:46.210113 | orchestrator | + sudo systemctl restart manager.service 2025-09-04 00:38:00.058740 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-09-04 00:38:00.058835 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-09-04 00:38:00.058851 | orchestrator | + local max_attempts=60 2025-09-04 00:38:00.058863 | orchestrator | + local name=ceph-ansible 2025-09-04 00:38:00.058874 | orchestrator | + local attempt_num=1 2025-09-04 00:38:00.058886 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-04 00:38:00.093578 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-04 00:38:00.093643 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-04 00:38:00.093656 | orchestrator | + sleep 5 2025-09-04 00:38:05.097640 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-04 00:38:05.130470 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-04 00:38:05.130517 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-04 00:38:05.130530 | orchestrator | + sleep 5 2025-09-04 
00:38:10.134134 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-04 00:38:10.170297 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-04 00:38:10.170347 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-04 00:38:10.170359 | orchestrator | + sleep 5 2025-09-04 00:38:15.174435 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-04 00:38:15.209469 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-04 00:38:15.209519 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-04 00:38:15.209531 | orchestrator | + sleep 5 2025-09-04 00:38:20.215095 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-04 00:38:20.255439 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-04 00:38:20.255544 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-04 00:38:20.255589 | orchestrator | + sleep 5 2025-09-04 00:38:25.260009 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-04 00:38:25.298011 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-04 00:38:25.298151 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-04 00:38:25.298166 | orchestrator | + sleep 5 2025-09-04 00:38:30.302486 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-04 00:38:30.345366 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-04 00:38:30.345485 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-04 00:38:30.345500 | orchestrator | + sleep 5 2025-09-04 00:38:35.349655 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-04 00:38:35.386555 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-04 00:38:35.386669 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-04 00:38:35.386693 | orchestrator | + sleep 5 2025-09-04 00:38:40.389050 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-04 00:38:40.426098 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-04 00:38:40.426180 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-04 00:38:40.426195 | orchestrator | + sleep 5 2025-09-04 00:38:45.429376 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-04 00:38:45.465135 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-04 00:38:45.465243 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-04 00:38:45.465269 | orchestrator | + sleep 5 2025-09-04 00:38:50.469814 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-04 00:38:50.503495 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-04 00:38:50.503558 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-04 00:38:50.503572 | orchestrator | + sleep 5 2025-09-04 00:38:55.508200 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-04 00:38:55.551627 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-04 00:38:55.551716 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-04 00:38:55.551732 | orchestrator | + sleep 5 2025-09-04 00:39:00.556300 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-04 00:39:00.593771 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-04 00:39:00.593848 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2025-09-04 00:39:00.593861 | orchestrator | + sleep 5 2025-09-04 00:39:05.600724 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-04 00:39:05.642987 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-04 00:39:05.643050 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-09-04 00:39:05.643065 | orchestrator | + local max_attempts=60 2025-09-04 00:39:05.643078 | orchestrator | + local name=kolla-ansible 2025-09-04 00:39:05.643089 | orchestrator | + local attempt_num=1 2025-09-04 00:39:05.643330 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-09-04 00:39:05.685316 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-04 00:39:05.685368 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-09-04 00:39:05.685383 | orchestrator | + local max_attempts=60 2025-09-04 00:39:05.685396 | orchestrator | + local name=osism-ansible 2025-09-04 00:39:05.685408 | orchestrator | + local attempt_num=1 2025-09-04 00:39:05.685775 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-09-04 00:39:05.723505 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-04 00:39:05.723561 | orchestrator | + [[ true == \t\r\u\e ]] 2025-09-04 00:39:05.723573 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-09-04 00:39:05.907674 | orchestrator | ARA in ceph-ansible already disabled. 2025-09-04 00:39:06.098541 | orchestrator | ARA in kolla-ansible already disabled. 2025-09-04 00:39:06.252221 | orchestrator | ARA in osism-ansible already disabled. 2025-09-04 00:39:06.426501 | orchestrator | ARA in osism-kubernetes already disabled. 2025-09-04 00:39:06.427030 | orchestrator | + osism apply gather-facts 2025-09-04 00:39:18.542221 | orchestrator | 2025-09-04 00:39:18 | INFO  | Task 6726f71f-7380-40a1-ade2-1b6fec0b4068 (gather-facts) was prepared for execution. 2025-09-04 00:39:18.542337 | orchestrator | 2025-09-04 00:39:18 | INFO  | It takes a moment until task 6726f71f-7380-40a1-ade2-1b6fec0b4068 (gather-facts) has been started and output is visible here. 
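For readability, the container health wait traced above boils down to a small polling helper. The following is a minimal sketch reconstructed from the trace (function and variable names are taken from the trace; the timeout handling is an assumption, since this run never reaches that branch):

    wait_for_container_healthy() {
        local max_attempts=$1
        local name=$2
        local attempt_num=1
        # Poll the Docker health status every 5 seconds until it reports "healthy".
        until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
            if (( attempt_num++ == max_attempts )); then
                echo "container $name did not become healthy in time" >&2
                return 1   # assumed failure handling; not exercised in this run
            fi
            sleep 5
        done
    }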
2025-09-04 00:39:31.522414 | orchestrator | 2025-09-04 00:39:31.522528 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-04 00:39:31.522570 | orchestrator | 2025-09-04 00:39:31.522583 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-04 00:39:31.522596 | orchestrator | Thursday 04 September 2025 00:39:22 +0000 (0:00:00.200) 0:00:00.200 **** 2025-09-04 00:39:31.522607 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:39:31.522620 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:39:31.522631 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:39:31.522643 | orchestrator | ok: [testbed-manager] 2025-09-04 00:39:31.522654 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:39:31.522664 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:39:31.522675 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:39:31.522686 | orchestrator | 2025-09-04 00:39:31.522698 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-04 00:39:31.522709 | orchestrator | 2025-09-04 00:39:31.522720 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-04 00:39:31.522731 | orchestrator | Thursday 04 September 2025 00:39:30 +0000 (0:00:07.994) 0:00:08.195 **** 2025-09-04 00:39:31.522742 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:39:31.522754 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:39:31.522765 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:39:31.522776 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:39:31.522787 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:39:31.522798 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:39:31.522809 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:39:31.522819 | orchestrator | 2025-09-04 00:39:31.522831 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:39:31.522842 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-04 00:39:31.522854 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-04 00:39:31.522865 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-04 00:39:31.522876 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-04 00:39:31.522887 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-04 00:39:31.522898 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-04 00:39:31.522945 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-04 00:39:31.522956 | orchestrator | 2025-09-04 00:39:31.522970 | orchestrator | 2025-09-04 00:39:31.522983 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 00:39:31.522996 | orchestrator | Thursday 04 September 2025 00:39:31 +0000 (0:00:00.528) 0:00:08.723 **** 2025-09-04 00:39:31.523009 | orchestrator | =============================================================================== 2025-09-04 00:39:31.523022 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.99s 2025-09-04 
00:39:31.523035 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s 2025-09-04 00:39:31.831806 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-09-04 00:39:31.843622 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-09-04 00:39:31.856931 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-09-04 00:39:31.875032 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-09-04 00:39:31.894831 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-09-04 00:39:31.913716 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-09-04 00:39:31.931462 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-09-04 00:39:31.950167 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-09-04 00:39:31.963424 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-09-04 00:39:31.976526 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-09-04 00:39:31.989739 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-09-04 00:39:32.000400 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-09-04 00:39:32.024799 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-09-04 00:39:32.038707 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-09-04 00:39:32.046606 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-09-04 00:39:32.057594 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-09-04 00:39:32.066286 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-09-04 00:39:32.084029 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-09-04 00:39:32.097404 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-09-04 00:39:32.107286 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-09-04 00:39:32.116495 | orchestrator | + [[ false == \t\r\u\e ]] 2025-09-04 00:39:32.350537 | orchestrator | ok: Runtime: 0:24:14.110974 2025-09-04 00:39:32.448603 | 2025-09-04 00:39:32.448759 | TASK [Deploy services] 2025-09-04 00:39:32.980283 | orchestrator | skipping: Conditional result was False 2025-09-04 00:39:33.005969 | 2025-09-04 00:39:33.006167 | TASK [Deploy in a nutshell] 2025-09-04 00:39:33.735024 | orchestrator | + set -e 
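The block of ln -sf calls above simply publishes the numbered configuration scripts as command names on the manager. A quick way to confirm the mapping (illustrative only, not part of the job):

    # Each helper is a symlink into /opt/configuration/scripts; resolve a few to verify.
    readlink -f /usr/local/bin/deploy-ceph-with-ansible
    # expected: /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh
    for helper in deploy-kubernetes deploy-infrastructure deploy-openstack upgrade-openstack; do
        printf '%s -> %s\n' "$helper" "$(readlink -f "/usr/local/bin/$helper")"
    done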
2025-09-04 00:39:33.735194 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-04 00:39:33.735218 | orchestrator | ++ export INTERACTIVE=false 2025-09-04 00:39:33.735239 | orchestrator | ++ INTERACTIVE=false 2025-09-04 00:39:33.735252 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-04 00:39:33.735265 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-04 00:39:33.735279 | orchestrator | + source /opt/manager-vars.sh 2025-09-04 00:39:33.735322 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-04 00:39:33.735351 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-04 00:39:33.735365 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-04 00:39:33.735380 | orchestrator | ++ CEPH_VERSION=reef 2025-09-04 00:39:33.735392 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-04 00:39:33.735410 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-04 00:39:33.735421 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-04 00:39:33.735441 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-04 00:39:33.735452 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-04 00:39:33.735466 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-04 00:39:33.735477 | orchestrator | ++ export ARA=false 2025-09-04 00:39:33.735488 | orchestrator | ++ ARA=false 2025-09-04 00:39:33.735499 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-04 00:39:33.735510 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-04 00:39:33.735521 | orchestrator | ++ export TEMPEST=true 2025-09-04 00:39:33.735531 | orchestrator | ++ TEMPEST=true 2025-09-04 00:39:33.735542 | orchestrator | ++ export IS_ZUUL=true 2025-09-04 00:39:33.735552 | orchestrator | ++ IS_ZUUL=true 2025-09-04 00:39:33.735563 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.146 2025-09-04 00:39:33.735574 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.146 2025-09-04 00:39:33.735585 | orchestrator | ++ export EXTERNAL_API=false 2025-09-04 00:39:33.735596 | orchestrator | ++ EXTERNAL_API=false 2025-09-04 00:39:33.735606 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-04 00:39:33.735617 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-04 00:39:33.735638 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-04 00:39:33.735650 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-04 00:39:33.735661 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-04 00:39:33.736614 | orchestrator | 2025-09-04 00:39:33.736633 | orchestrator | # PULL IMAGES 2025-09-04 00:39:33.736646 | orchestrator | 2025-09-04 00:39:33.736666 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-04 00:39:33.736679 | orchestrator | + echo 2025-09-04 00:39:33.736692 | orchestrator | + echo '# PULL IMAGES' 2025-09-04 00:39:33.736706 | orchestrator | + echo 2025-09-04 00:39:33.736990 | orchestrator | ++ semver latest 7.0.0 2025-09-04 00:39:33.795361 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-04 00:39:33.795396 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-04 00:39:33.795408 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2025-09-04 00:39:35.671028 | orchestrator | 2025-09-04 00:39:35 | INFO  | Trying to run play pull-images in environment custom 2025-09-04 00:39:45.851506 | orchestrator | 2025-09-04 00:39:45 | INFO  | Task 4496263a-1c3e-47af-9c33-ab5cd9457077 (pull-images) was prepared for execution. 2025-09-04 00:39:45.851611 | orchestrator | 2025-09-04 00:39:45 | INFO  | Task 4496263a-1c3e-47af-9c33-ab5cd9457077 is running in background. No more output. Check ARA for logs. 
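The trace above shows the usual version gate in the testbed scripts: "semver latest 7.0.0" returns -1, so the numeric comparison fails, but the literal value "latest" satisfies the second test and the newer code path is taken. A sketch of the pattern (the semver helper is the comparison tool used by the scripts; its implementation is not shown in this log):

    if [[ $(semver "$MANAGER_VERSION" 7.0.0) -ge 0 ]] || [[ "$MANAGER_VERSION" == "latest" ]]; then
        # dispatch the image pull in the background and keep going; -r 2 retries, -e custom environment
        osism apply --no-wait -r 2 -e custom pull-images
    fi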
2025-09-04 00:39:48.144638 | orchestrator | 2025-09-04 00:39:48 | INFO  | Trying to run play wipe-partitions in environment custom 2025-09-04 00:39:58.247577 | orchestrator | 2025-09-04 00:39:58 | INFO  | Task d34da82e-5007-487f-84bb-91797b932e4c (wipe-partitions) was prepared for execution. 2025-09-04 00:39:58.247678 | orchestrator | 2025-09-04 00:39:58 | INFO  | It takes a moment until task d34da82e-5007-487f-84bb-91797b932e4c (wipe-partitions) has been started and output is visible here. 2025-09-04 00:40:10.987377 | orchestrator | 2025-09-04 00:40:10.987438 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-09-04 00:40:10.987444 | orchestrator | 2025-09-04 00:40:10.987449 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-09-04 00:40:10.987456 | orchestrator | Thursday 04 September 2025 00:40:02 +0000 (0:00:00.196) 0:00:00.196 **** 2025-09-04 00:40:10.987461 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:40:10.987465 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:40:10.987469 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:40:10.987473 | orchestrator | 2025-09-04 00:40:10.987477 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-09-04 00:40:10.987492 | orchestrator | Thursday 04 September 2025 00:40:03 +0000 (0:00:00.592) 0:00:00.789 **** 2025-09-04 00:40:10.987497 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:40:10.987501 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:40:10.987506 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:40:10.987510 | orchestrator | 2025-09-04 00:40:10.987514 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-09-04 00:40:10.987518 | orchestrator | Thursday 04 September 2025 00:40:03 +0000 (0:00:00.273) 0:00:01.063 **** 2025-09-04 00:40:10.987521 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:40:10.987526 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:40:10.987529 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:40:10.987533 | orchestrator | 2025-09-04 00:40:10.987537 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-09-04 00:40:10.987541 | orchestrator | Thursday 04 September 2025 00:40:04 +0000 (0:00:00.740) 0:00:01.804 **** 2025-09-04 00:40:10.987545 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:40:10.987548 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:40:10.987552 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:40:10.987556 | orchestrator | 2025-09-04 00:40:10.987560 | orchestrator | TASK [Check device availability] *********************************************** 2025-09-04 00:40:10.987564 | orchestrator | Thursday 04 September 2025 00:40:04 +0000 (0:00:00.252) 0:00:02.057 **** 2025-09-04 00:40:10.987568 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-09-04 00:40:10.987573 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-09-04 00:40:10.987577 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-09-04 00:40:10.987580 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-09-04 00:40:10.987584 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-09-04 00:40:10.987588 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-09-04 00:40:10.987592 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 
2025-09-04 00:40:10.987595 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-09-04 00:40:10.987599 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-09-04 00:40:10.987603 | orchestrator | 2025-09-04 00:40:10.987607 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-09-04 00:40:10.987611 | orchestrator | Thursday 04 September 2025 00:40:05 +0000 (0:00:01.188) 0:00:03.245 **** 2025-09-04 00:40:10.987615 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-09-04 00:40:10.987619 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-09-04 00:40:10.987622 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-09-04 00:40:10.987626 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-09-04 00:40:10.987630 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-09-04 00:40:10.987634 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-09-04 00:40:10.987637 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-09-04 00:40:10.987641 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-09-04 00:40:10.987645 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-09-04 00:40:10.987648 | orchestrator | 2025-09-04 00:40:10.987652 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-09-04 00:40:10.987656 | orchestrator | Thursday 04 September 2025 00:40:07 +0000 (0:00:01.331) 0:00:04.577 **** 2025-09-04 00:40:10.987660 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-09-04 00:40:10.987663 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-09-04 00:40:10.987667 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-09-04 00:40:10.987671 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-09-04 00:40:10.987675 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-09-04 00:40:10.987681 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-09-04 00:40:10.987685 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-09-04 00:40:10.987691 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-09-04 00:40:10.987695 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-09-04 00:40:10.987699 | orchestrator | 2025-09-04 00:40:10.987703 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-09-04 00:40:10.987706 | orchestrator | Thursday 04 September 2025 00:40:09 +0000 (0:00:02.330) 0:00:06.908 **** 2025-09-04 00:40:10.987710 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:40:10.987714 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:40:10.987718 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:40:10.987721 | orchestrator | 2025-09-04 00:40:10.987725 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-09-04 00:40:10.987729 | orchestrator | Thursday 04 September 2025 00:40:10 +0000 (0:00:00.607) 0:00:07.516 **** 2025-09-04 00:40:10.987733 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:40:10.987737 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:40:10.987740 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:40:10.987744 | orchestrator | 2025-09-04 00:40:10.987748 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:40:10.987753 | orchestrator | testbed-node-3 : ok=7  changed=5  
unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-04 00:40:10.987757 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-04 00:40:10.987768 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-04 00:40:10.987772 | orchestrator | 2025-09-04 00:40:10.987776 | orchestrator | 2025-09-04 00:40:10.987780 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 00:40:10.987784 | orchestrator | Thursday 04 September 2025 00:40:10 +0000 (0:00:00.611) 0:00:08.128 **** 2025-09-04 00:40:10.987787 | orchestrator | =============================================================================== 2025-09-04 00:40:10.987791 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.33s 2025-09-04 00:40:10.987795 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.33s 2025-09-04 00:40:10.987799 | orchestrator | Check device availability ----------------------------------------------- 1.19s 2025-09-04 00:40:10.987802 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.74s 2025-09-04 00:40:10.987806 | orchestrator | Request device events from the kernel ----------------------------------- 0.61s 2025-09-04 00:40:10.987810 | orchestrator | Reload udev rules ------------------------------------------------------- 0.61s 2025-09-04 00:40:10.987813 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.59s 2025-09-04 00:40:10.987817 | orchestrator | Remove all rook related logical devices --------------------------------- 0.27s 2025-09-04 00:40:10.987821 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.25s 2025-09-04 00:40:23.370263 | orchestrator | 2025-09-04 00:40:23 | INFO  | Task aa68fc30-5c4d-4a48-bd21-941534204608 (facts) was prepared for execution. 2025-09-04 00:40:23.370386 | orchestrator | 2025-09-04 00:40:23 | INFO  | It takes a moment until task aa68fc30-5c4d-4a48-bd21-941534204608 (facts) has been started and output is visible here. 
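In plain shell terms, the wipe-partitions play above does the equivalent of the following on each Ceph data disk (an assumed paraphrase that matches the task names, not a literal excerpt of the role):

    for dev in /dev/sdb /dev/sdc /dev/sdd; do
        sudo wipefs --all "$dev"                                    # "Wipe partitions with wipefs"
        sudo dd if=/dev/zero of="$dev" bs=1M count=32 oflag=direct  # "Overwrite first 32M with zeros"
    done
    sudo udevadm control --reload-rules   # "Reload udev rules"
    sudo udevadm trigger                  # "Request device events from the kernel"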
2025-09-04 00:40:35.754225 | orchestrator | 2025-09-04 00:40:35.754323 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-09-04 00:40:35.754339 | orchestrator | 2025-09-04 00:40:35.754352 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-04 00:40:35.754365 | orchestrator | Thursday 04 September 2025 00:40:27 +0000 (0:00:00.282) 0:00:00.282 **** 2025-09-04 00:40:35.754376 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:40:35.754388 | orchestrator | ok: [testbed-manager] 2025-09-04 00:40:35.754399 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:40:35.754432 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:40:35.754444 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:40:35.754455 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:40:35.754465 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:40:35.754476 | orchestrator | 2025-09-04 00:40:35.754489 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-04 00:40:35.754500 | orchestrator | Thursday 04 September 2025 00:40:28 +0000 (0:00:01.083) 0:00:01.365 **** 2025-09-04 00:40:35.754510 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:40:35.754522 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:40:35.754532 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:40:35.754542 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:40:35.754553 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:40:35.754563 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:40:35.754574 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:40:35.754585 | orchestrator | 2025-09-04 00:40:35.754596 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-04 00:40:35.754606 | orchestrator | 2025-09-04 00:40:35.754617 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-04 00:40:35.754628 | orchestrator | Thursday 04 September 2025 00:40:29 +0000 (0:00:01.247) 0:00:02.613 **** 2025-09-04 00:40:35.754638 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:40:35.754649 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:40:35.754660 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:40:35.754671 | orchestrator | ok: [testbed-manager] 2025-09-04 00:40:35.754682 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:40:35.754692 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:40:35.754703 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:40:35.754713 | orchestrator | 2025-09-04 00:40:35.754724 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-04 00:40:35.754734 | orchestrator | 2025-09-04 00:40:35.754745 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-04 00:40:35.754784 | orchestrator | Thursday 04 September 2025 00:40:34 +0000 (0:00:04.718) 0:00:07.332 **** 2025-09-04 00:40:35.754798 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:40:35.754823 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:40:35.754837 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:40:35.754849 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:40:35.754862 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:40:35.754875 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:40:35.754906 | orchestrator | skipping: 
[testbed-node-5] 2025-09-04 00:40:35.754920 | orchestrator | 2025-09-04 00:40:35.754933 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:40:35.754947 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-04 00:40:35.754960 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-04 00:40:35.754973 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-04 00:40:35.754986 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-04 00:40:35.754999 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-04 00:40:35.755013 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-04 00:40:35.755026 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-04 00:40:35.755039 | orchestrator | 2025-09-04 00:40:35.755062 | orchestrator | 2025-09-04 00:40:35.755075 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 00:40:35.755088 | orchestrator | Thursday 04 September 2025 00:40:35 +0000 (0:00:00.721) 0:00:08.054 **** 2025-09-04 00:40:35.755101 | orchestrator | =============================================================================== 2025-09-04 00:40:35.755115 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.72s 2025-09-04 00:40:35.755128 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.25s 2025-09-04 00:40:35.755141 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.08s 2025-09-04 00:40:35.755153 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.72s 2025-09-04 00:40:38.043148 | orchestrator | 2025-09-04 00:40:38 | INFO  | Task fba5b8de-1a68-43f9-923a-635d2e78ad70 (ceph-configure-lvm-volumes) was prepared for execution. 2025-09-04 00:40:38.043235 | orchestrator | 2025-09-04 00:40:38 | INFO  | It takes a moment until task fba5b8de-1a68-43f9-923a-635d2e78ad70 (ceph-configure-lvm-volumes) has been started and output is visible here. 
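The osism.commons.facts role above only had to create the custom facts directory (copying fact files was skipped). As background, and assuming the role uses Ansible's standard local-facts mechanism, custom facts work like this:

    sudo mkdir -p /etc/ansible/facts.d                   # default directory for Ansible local facts
    echo '{"role": "testbed-node"}' | sudo tee /etc/ansible/facts.d/example.fact >/dev/null
    ansible localhost -m setup -a 'filter=ansible_local' # the fact appears under ansible_local.example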
2025-09-04 00:40:49.964494 | orchestrator | 2025-09-04 00:40:49.964592 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-04 00:40:49.964610 | orchestrator | 2025-09-04 00:40:49.964623 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-04 00:40:49.964638 | orchestrator | Thursday 04 September 2025 00:40:42 +0000 (0:00:00.326) 0:00:00.326 **** 2025-09-04 00:40:49.964649 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-04 00:40:49.964661 | orchestrator | 2025-09-04 00:40:49.964673 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-04 00:40:49.964685 | orchestrator | Thursday 04 September 2025 00:40:42 +0000 (0:00:00.264) 0:00:00.590 **** 2025-09-04 00:40:49.964697 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:40:49.964710 | orchestrator | 2025-09-04 00:40:49.964721 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:40:49.964733 | orchestrator | Thursday 04 September 2025 00:40:42 +0000 (0:00:00.224) 0:00:00.815 **** 2025-09-04 00:40:49.964745 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-09-04 00:40:49.964757 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-09-04 00:40:49.964769 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-09-04 00:40:49.964781 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-09-04 00:40:49.964792 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-09-04 00:40:49.964804 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-09-04 00:40:49.964815 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-09-04 00:40:49.964827 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-09-04 00:40:49.964838 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-09-04 00:40:49.964849 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-09-04 00:40:49.964861 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-09-04 00:40:49.964884 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-09-04 00:40:49.964938 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-09-04 00:40:49.964950 | orchestrator | 2025-09-04 00:40:49.964961 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:40:49.964972 | orchestrator | Thursday 04 September 2025 00:40:43 +0000 (0:00:00.367) 0:00:01.182 **** 2025-09-04 00:40:49.964982 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:40:49.965014 | orchestrator | 2025-09-04 00:40:49.965026 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:40:49.965039 | orchestrator | Thursday 04 September 2025 00:40:43 +0000 (0:00:00.477) 0:00:01.659 **** 2025-09-04 00:40:49.965053 | orchestrator | skipping: [testbed-node-3] 2025-09-04 
00:40:49.965066 | orchestrator | 2025-09-04 00:40:49.965078 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:40:49.965090 | orchestrator | Thursday 04 September 2025 00:40:43 +0000 (0:00:00.237) 0:00:01.897 **** 2025-09-04 00:40:49.965102 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:40:49.965114 | orchestrator | 2025-09-04 00:40:49.965126 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:40:49.965138 | orchestrator | Thursday 04 September 2025 00:40:44 +0000 (0:00:00.208) 0:00:02.106 **** 2025-09-04 00:40:49.965151 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:40:49.965167 | orchestrator | 2025-09-04 00:40:49.965180 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:40:49.965192 | orchestrator | Thursday 04 September 2025 00:40:44 +0000 (0:00:00.213) 0:00:02.319 **** 2025-09-04 00:40:49.965205 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:40:49.965217 | orchestrator | 2025-09-04 00:40:49.965231 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:40:49.965244 | orchestrator | Thursday 04 September 2025 00:40:44 +0000 (0:00:00.213) 0:00:02.533 **** 2025-09-04 00:40:49.965257 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:40:49.965269 | orchestrator | 2025-09-04 00:40:49.965282 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:40:49.965294 | orchestrator | Thursday 04 September 2025 00:40:44 +0000 (0:00:00.193) 0:00:02.726 **** 2025-09-04 00:40:49.965306 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:40:49.965319 | orchestrator | 2025-09-04 00:40:49.965331 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:40:49.965343 | orchestrator | Thursday 04 September 2025 00:40:44 +0000 (0:00:00.210) 0:00:02.936 **** 2025-09-04 00:40:49.965356 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:40:49.965368 | orchestrator | 2025-09-04 00:40:49.965381 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:40:49.965393 | orchestrator | Thursday 04 September 2025 00:40:45 +0000 (0:00:00.224) 0:00:03.160 **** 2025-09-04 00:40:49.965404 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472) 2025-09-04 00:40:49.965416 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472) 2025-09-04 00:40:49.965426 | orchestrator | 2025-09-04 00:40:49.965437 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:40:49.965447 | orchestrator | Thursday 04 September 2025 00:40:45 +0000 (0:00:00.417) 0:00:03.578 **** 2025-09-04 00:40:49.965474 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7a7dbb75-788e-43b1-b83f-869b645534fa) 2025-09-04 00:40:49.965486 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7a7dbb75-788e-43b1-b83f-869b645534fa) 2025-09-04 00:40:49.965496 | orchestrator | 2025-09-04 00:40:49.965507 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:40:49.965518 | orchestrator | Thursday 04 September 2025 00:40:45 +0000 (0:00:00.436) 0:00:04.014 **** 2025-09-04 
00:40:49.965528 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9f385429-6afa-4c66-a502-4318c5de8bbc) 2025-09-04 00:40:49.965539 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9f385429-6afa-4c66-a502-4318c5de8bbc) 2025-09-04 00:40:49.965550 | orchestrator | 2025-09-04 00:40:49.965561 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:40:49.965571 | orchestrator | Thursday 04 September 2025 00:40:46 +0000 (0:00:00.605) 0:00:04.620 **** 2025-09-04 00:40:49.965582 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_061ec2a1-a81e-446f-a16e-21a91d07e637) 2025-09-04 00:40:49.965601 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_061ec2a1-a81e-446f-a16e-21a91d07e637) 2025-09-04 00:40:49.965612 | orchestrator | 2025-09-04 00:40:49.965622 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:40:49.965633 | orchestrator | Thursday 04 September 2025 00:40:47 +0000 (0:00:00.635) 0:00:05.255 **** 2025-09-04 00:40:49.965644 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-04 00:40:49.965654 | orchestrator | 2025-09-04 00:40:49.965665 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:40:49.965681 | orchestrator | Thursday 04 September 2025 00:40:47 +0000 (0:00:00.727) 0:00:05.983 **** 2025-09-04 00:40:49.965692 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-09-04 00:40:49.965702 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-09-04 00:40:49.965713 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-09-04 00:40:49.965723 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-09-04 00:40:49.965734 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-09-04 00:40:49.965744 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-09-04 00:40:49.965754 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-09-04 00:40:49.965764 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-09-04 00:40:49.965775 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-09-04 00:40:49.965785 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-09-04 00:40:49.965796 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-09-04 00:40:49.965806 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-09-04 00:40:49.965816 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-09-04 00:40:49.965827 | orchestrator | 2025-09-04 00:40:49.965838 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:40:49.965848 | orchestrator | Thursday 04 September 2025 00:40:48 +0000 (0:00:00.384) 0:00:06.367 **** 2025-09-04 00:40:49.965859 | orchestrator | skipping: [testbed-node-3] 
2025-09-04 00:40:49.965870 | orchestrator | 2025-09-04 00:40:49.965880 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:40:49.965913 | orchestrator | Thursday 04 September 2025 00:40:48 +0000 (0:00:00.214) 0:00:06.582 **** 2025-09-04 00:40:49.965925 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:40:49.965935 | orchestrator | 2025-09-04 00:40:49.965946 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:40:49.965956 | orchestrator | Thursday 04 September 2025 00:40:48 +0000 (0:00:00.221) 0:00:06.803 **** 2025-09-04 00:40:49.965967 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:40:49.965977 | orchestrator | 2025-09-04 00:40:49.965988 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:40:49.965998 | orchestrator | Thursday 04 September 2025 00:40:48 +0000 (0:00:00.205) 0:00:07.009 **** 2025-09-04 00:40:49.966009 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:40:49.966065 | orchestrator | 2025-09-04 00:40:49.966077 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:40:49.966088 | orchestrator | Thursday 04 September 2025 00:40:49 +0000 (0:00:00.200) 0:00:07.209 **** 2025-09-04 00:40:49.966098 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:40:49.966109 | orchestrator | 2025-09-04 00:40:49.966126 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:40:49.966137 | orchestrator | Thursday 04 September 2025 00:40:49 +0000 (0:00:00.210) 0:00:07.420 **** 2025-09-04 00:40:49.966148 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:40:49.966158 | orchestrator | 2025-09-04 00:40:49.966168 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:40:49.966179 | orchestrator | Thursday 04 September 2025 00:40:49 +0000 (0:00:00.202) 0:00:07.622 **** 2025-09-04 00:40:49.966189 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:40:49.966200 | orchestrator | 2025-09-04 00:40:49.966210 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:40:49.966221 | orchestrator | Thursday 04 September 2025 00:40:49 +0000 (0:00:00.194) 0:00:07.816 **** 2025-09-04 00:40:49.966239 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:40:57.629594 | orchestrator | 2025-09-04 00:40:57.629704 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:40:57.629727 | orchestrator | Thursday 04 September 2025 00:40:49 +0000 (0:00:00.204) 0:00:08.021 **** 2025-09-04 00:40:57.629739 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-09-04 00:40:57.629751 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-09-04 00:40:57.629762 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-09-04 00:40:57.629773 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-09-04 00:40:57.629784 | orchestrator | 2025-09-04 00:40:57.629795 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:40:57.629806 | orchestrator | Thursday 04 September 2025 00:40:50 +0000 (0:00:01.026) 0:00:09.048 **** 2025-09-04 00:40:57.629817 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:40:57.629828 | orchestrator | 2025-09-04 00:40:57.629839 | orchestrator | 
TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:40:57.629850 | orchestrator | Thursday 04 September 2025 00:40:51 +0000 (0:00:00.219) 0:00:09.267 **** 2025-09-04 00:40:57.629860 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:40:57.629871 | orchestrator | 2025-09-04 00:40:57.629882 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:40:57.629939 | orchestrator | Thursday 04 September 2025 00:40:51 +0000 (0:00:00.207) 0:00:09.474 **** 2025-09-04 00:40:57.629950 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:40:57.629961 | orchestrator | 2025-09-04 00:40:57.629972 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:40:57.629983 | orchestrator | Thursday 04 September 2025 00:40:51 +0000 (0:00:00.205) 0:00:09.680 **** 2025-09-04 00:40:57.629994 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:40:57.630005 | orchestrator | 2025-09-04 00:40:57.630069 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-04 00:40:57.630084 | orchestrator | Thursday 04 September 2025 00:40:51 +0000 (0:00:00.205) 0:00:09.885 **** 2025-09-04 00:40:57.630102 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-09-04 00:40:57.630123 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-09-04 00:40:57.630145 | orchestrator | 2025-09-04 00:40:57.630167 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-04 00:40:57.630187 | orchestrator | Thursday 04 September 2025 00:40:51 +0000 (0:00:00.174) 0:00:10.060 **** 2025-09-04 00:40:57.630224 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:40:57.630245 | orchestrator | 2025-09-04 00:40:57.630261 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-04 00:40:57.630275 | orchestrator | Thursday 04 September 2025 00:40:52 +0000 (0:00:00.140) 0:00:10.200 **** 2025-09-04 00:40:57.630294 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:40:57.630310 | orchestrator | 2025-09-04 00:40:57.630326 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-04 00:40:57.630342 | orchestrator | Thursday 04 September 2025 00:40:52 +0000 (0:00:00.135) 0:00:10.335 **** 2025-09-04 00:40:57.630358 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:40:57.630395 | orchestrator | 2025-09-04 00:40:57.630416 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-04 00:40:57.630432 | orchestrator | Thursday 04 September 2025 00:40:52 +0000 (0:00:00.133) 0:00:10.469 **** 2025-09-04 00:40:57.630449 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:40:57.630466 | orchestrator | 2025-09-04 00:40:57.630483 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-04 00:40:57.630499 | orchestrator | Thursday 04 September 2025 00:40:52 +0000 (0:00:00.147) 0:00:10.616 **** 2025-09-04 00:40:57.630518 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e3ae8186-8e2b-5882-a4eb-fc7766e33302'}}) 2025-09-04 00:40:57.630535 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7cbfcabc-5503-586a-bc65-b738adbafdd0'}}) 2025-09-04 00:40:57.630551 | orchestrator | 
2025-09-04 00:40:57.630568 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-04 00:40:57.630584 | orchestrator | Thursday 04 September 2025 00:40:52 +0000 (0:00:00.158) 0:00:10.775 **** 2025-09-04 00:40:57.630604 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e3ae8186-8e2b-5882-a4eb-fc7766e33302'}})  2025-09-04 00:40:57.630630 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7cbfcabc-5503-586a-bc65-b738adbafdd0'}})  2025-09-04 00:40:57.630647 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:40:57.630664 | orchestrator | 2025-09-04 00:40:57.630684 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-04 00:40:57.630702 | orchestrator | Thursday 04 September 2025 00:40:52 +0000 (0:00:00.167) 0:00:10.943 **** 2025-09-04 00:40:57.630720 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e3ae8186-8e2b-5882-a4eb-fc7766e33302'}})  2025-09-04 00:40:57.630736 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7cbfcabc-5503-586a-bc65-b738adbafdd0'}})  2025-09-04 00:40:57.630754 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:40:57.630771 | orchestrator | 2025-09-04 00:40:57.630790 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-04 00:40:57.630809 | orchestrator | Thursday 04 September 2025 00:40:53 +0000 (0:00:00.339) 0:00:11.283 **** 2025-09-04 00:40:57.630828 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e3ae8186-8e2b-5882-a4eb-fc7766e33302'}})  2025-09-04 00:40:57.630847 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7cbfcabc-5503-586a-bc65-b738adbafdd0'}})  2025-09-04 00:40:57.630867 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:40:57.630878 | orchestrator | 2025-09-04 00:40:57.630947 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-04 00:40:57.630963 | orchestrator | Thursday 04 September 2025 00:40:53 +0000 (0:00:00.175) 0:00:11.458 **** 2025-09-04 00:40:57.630980 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:40:57.631001 | orchestrator | 2025-09-04 00:40:57.631019 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-04 00:40:57.631030 | orchestrator | Thursday 04 September 2025 00:40:53 +0000 (0:00:00.125) 0:00:11.584 **** 2025-09-04 00:40:57.631041 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:40:57.631052 | orchestrator | 2025-09-04 00:40:57.631062 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-04 00:40:57.631073 | orchestrator | Thursday 04 September 2025 00:40:53 +0000 (0:00:00.142) 0:00:11.727 **** 2025-09-04 00:40:57.631086 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:40:57.631105 | orchestrator | 2025-09-04 00:40:57.631125 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-04 00:40:57.631138 | orchestrator | Thursday 04 September 2025 00:40:53 +0000 (0:00:00.145) 0:00:11.873 **** 2025-09-04 00:40:57.631149 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:40:57.631159 | orchestrator | 2025-09-04 00:40:57.631179 | orchestrator | TASK [Set DB+WAL devices config data] 
****************************************** 2025-09-04 00:40:57.631190 | orchestrator | Thursday 04 September 2025 00:40:53 +0000 (0:00:00.135) 0:00:12.008 **** 2025-09-04 00:40:57.631200 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:40:57.631211 | orchestrator | 2025-09-04 00:40:57.631222 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-04 00:40:57.631233 | orchestrator | Thursday 04 September 2025 00:40:54 +0000 (0:00:00.147) 0:00:12.156 **** 2025-09-04 00:40:57.631243 | orchestrator | ok: [testbed-node-3] => { 2025-09-04 00:40:57.631254 | orchestrator |  "ceph_osd_devices": { 2025-09-04 00:40:57.631265 | orchestrator |  "sdb": { 2025-09-04 00:40:57.631276 | orchestrator |  "osd_lvm_uuid": "e3ae8186-8e2b-5882-a4eb-fc7766e33302" 2025-09-04 00:40:57.631286 | orchestrator |  }, 2025-09-04 00:40:57.631297 | orchestrator |  "sdc": { 2025-09-04 00:40:57.631308 | orchestrator |  "osd_lvm_uuid": "7cbfcabc-5503-586a-bc65-b738adbafdd0" 2025-09-04 00:40:57.631318 | orchestrator |  } 2025-09-04 00:40:57.631329 | orchestrator |  } 2025-09-04 00:40:57.631340 | orchestrator | } 2025-09-04 00:40:57.631351 | orchestrator | 2025-09-04 00:40:57.631361 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-04 00:40:57.631372 | orchestrator | Thursday 04 September 2025 00:40:54 +0000 (0:00:00.149) 0:00:12.305 **** 2025-09-04 00:40:57.631382 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:40:57.631393 | orchestrator | 2025-09-04 00:40:57.631404 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-04 00:40:57.631414 | orchestrator | Thursday 04 September 2025 00:40:54 +0000 (0:00:00.137) 0:00:12.443 **** 2025-09-04 00:40:57.631425 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:40:57.631436 | orchestrator | 2025-09-04 00:40:57.631446 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-09-04 00:40:57.631457 | orchestrator | Thursday 04 September 2025 00:40:54 +0000 (0:00:00.135) 0:00:12.578 **** 2025-09-04 00:40:57.631467 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:40:57.631478 | orchestrator | 2025-09-04 00:40:57.631489 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-04 00:40:57.631499 | orchestrator | Thursday 04 September 2025 00:40:54 +0000 (0:00:00.117) 0:00:12.696 **** 2025-09-04 00:40:57.631510 | orchestrator | changed: [testbed-node-3] => { 2025-09-04 00:40:57.631521 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-04 00:40:57.631531 | orchestrator |  "ceph_osd_devices": { 2025-09-04 00:40:57.631542 | orchestrator |  "sdb": { 2025-09-04 00:40:57.631553 | orchestrator |  "osd_lvm_uuid": "e3ae8186-8e2b-5882-a4eb-fc7766e33302" 2025-09-04 00:40:57.631564 | orchestrator |  }, 2025-09-04 00:40:57.631574 | orchestrator |  "sdc": { 2025-09-04 00:40:57.631585 | orchestrator |  "osd_lvm_uuid": "7cbfcabc-5503-586a-bc65-b738adbafdd0" 2025-09-04 00:40:57.631596 | orchestrator |  } 2025-09-04 00:40:57.631606 | orchestrator |  }, 2025-09-04 00:40:57.631617 | orchestrator |  "lvm_volumes": [ 2025-09-04 00:40:57.631628 | orchestrator |  { 2025-09-04 00:40:57.631638 | orchestrator |  "data": "osd-block-e3ae8186-8e2b-5882-a4eb-fc7766e33302", 2025-09-04 00:40:57.631649 | orchestrator |  "data_vg": "ceph-e3ae8186-8e2b-5882-a4eb-fc7766e33302" 2025-09-04 00:40:57.631659 | orchestrator |  }, 2025-09-04 
00:40:57.631670 | orchestrator |  { 2025-09-04 00:40:57.631681 | orchestrator |  "data": "osd-block-7cbfcabc-5503-586a-bc65-b738adbafdd0", 2025-09-04 00:40:57.631692 | orchestrator |  "data_vg": "ceph-7cbfcabc-5503-586a-bc65-b738adbafdd0" 2025-09-04 00:40:57.631702 | orchestrator |  } 2025-09-04 00:40:57.631713 | orchestrator |  ] 2025-09-04 00:40:57.631723 | orchestrator |  } 2025-09-04 00:40:57.631734 | orchestrator | } 2025-09-04 00:40:57.631745 | orchestrator | 2025-09-04 00:40:57.631760 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-04 00:40:57.631783 | orchestrator | Thursday 04 September 2025 00:40:54 +0000 (0:00:00.220) 0:00:12.917 **** 2025-09-04 00:40:57.631795 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-04 00:40:57.631805 | orchestrator | 2025-09-04 00:40:57.631816 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-04 00:40:57.631827 | orchestrator | 2025-09-04 00:40:57.631838 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-04 00:40:57.631848 | orchestrator | Thursday 04 September 2025 00:40:57 +0000 (0:00:02.229) 0:00:15.147 **** 2025-09-04 00:40:57.631859 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-04 00:40:57.631870 | orchestrator | 2025-09-04 00:40:57.631880 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-04 00:40:57.631915 | orchestrator | Thursday 04 September 2025 00:40:57 +0000 (0:00:00.289) 0:00:15.437 **** 2025-09-04 00:40:57.631928 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:40:57.631939 | orchestrator | 2025-09-04 00:40:57.631950 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:40:57.631969 | orchestrator | Thursday 04 September 2025 00:40:57 +0000 (0:00:00.247) 0:00:15.684 **** 2025-09-04 00:41:05.647194 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-09-04 00:41:05.647295 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-09-04 00:41:05.647310 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-09-04 00:41:05.647322 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-09-04 00:41:05.647332 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-09-04 00:41:05.647343 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-09-04 00:41:05.647354 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-09-04 00:41:05.647365 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-09-04 00:41:05.647375 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-09-04 00:41:05.647386 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-09-04 00:41:05.647396 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-09-04 00:41:05.647407 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-09-04 00:41:05.647417 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-09-04 00:41:05.647432 | orchestrator | 2025-09-04 00:41:05.647444 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:41:05.647456 | orchestrator | Thursday 04 September 2025 00:40:58 +0000 (0:00:00.389) 0:00:16.074 **** 2025-09-04 00:41:05.647467 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:41:05.647479 | orchestrator | 2025-09-04 00:41:05.647489 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:41:05.647500 | orchestrator | Thursday 04 September 2025 00:40:58 +0000 (0:00:00.202) 0:00:16.276 **** 2025-09-04 00:41:05.647511 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:41:05.647521 | orchestrator | 2025-09-04 00:41:05.647532 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:41:05.647543 | orchestrator | Thursday 04 September 2025 00:40:58 +0000 (0:00:00.195) 0:00:16.472 **** 2025-09-04 00:41:05.647553 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:41:05.647564 | orchestrator | 2025-09-04 00:41:05.647575 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:41:05.647586 | orchestrator | Thursday 04 September 2025 00:40:58 +0000 (0:00:00.203) 0:00:16.675 **** 2025-09-04 00:41:05.647596 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:41:05.647629 | orchestrator | 2025-09-04 00:41:05.647640 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:41:05.647651 | orchestrator | Thursday 04 September 2025 00:40:58 +0000 (0:00:00.196) 0:00:16.872 **** 2025-09-04 00:41:05.647662 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:41:05.647672 | orchestrator | 2025-09-04 00:41:05.647683 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:41:05.647694 | orchestrator | Thursday 04 September 2025 00:40:59 +0000 (0:00:00.594) 0:00:17.467 **** 2025-09-04 00:41:05.647704 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:41:05.647715 | orchestrator | 2025-09-04 00:41:05.647726 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:41:05.647736 | orchestrator | Thursday 04 September 2025 00:40:59 +0000 (0:00:00.190) 0:00:17.657 **** 2025-09-04 00:41:05.647762 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:41:05.647774 | orchestrator | 2025-09-04 00:41:05.647784 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:41:05.647795 | orchestrator | Thursday 04 September 2025 00:40:59 +0000 (0:00:00.221) 0:00:17.879 **** 2025-09-04 00:41:05.647806 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:41:05.647816 | orchestrator | 2025-09-04 00:41:05.647827 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:41:05.647838 | orchestrator | Thursday 04 September 2025 00:41:00 +0000 (0:00:00.214) 0:00:18.094 **** 2025-09-04 00:41:05.647849 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d) 2025-09-04 00:41:05.647861 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d) 2025-09-04 00:41:05.647871 | orchestrator | 2025-09-04 
00:41:05.647882 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:41:05.647928 | orchestrator | Thursday 04 September 2025 00:41:00 +0000 (0:00:00.470) 0:00:18.565 **** 2025-09-04 00:41:05.647940 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_29dd0003-fb59-4bf4-b86e-44c71c2fb42f) 2025-09-04 00:41:05.647951 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_29dd0003-fb59-4bf4-b86e-44c71c2fb42f) 2025-09-04 00:41:05.647962 | orchestrator | 2025-09-04 00:41:05.647973 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:41:05.647983 | orchestrator | Thursday 04 September 2025 00:41:00 +0000 (0:00:00.420) 0:00:18.986 **** 2025-09-04 00:41:05.647994 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1db7e25a-216c-47db-963d-7c5f28c46cb6) 2025-09-04 00:41:05.648005 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1db7e25a-216c-47db-963d-7c5f28c46cb6) 2025-09-04 00:41:05.648016 | orchestrator | 2025-09-04 00:41:05.648027 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:41:05.648038 | orchestrator | Thursday 04 September 2025 00:41:01 +0000 (0:00:00.438) 0:00:19.424 **** 2025-09-04 00:41:05.648066 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_10337e6d-8da5-41a0-908c-89b7c0023a77) 2025-09-04 00:41:05.648078 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_10337e6d-8da5-41a0-908c-89b7c0023a77) 2025-09-04 00:41:05.648089 | orchestrator | 2025-09-04 00:41:05.648100 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:41:05.648111 | orchestrator | Thursday 04 September 2025 00:41:01 +0000 (0:00:00.501) 0:00:19.926 **** 2025-09-04 00:41:05.648122 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-04 00:41:05.648133 | orchestrator | 2025-09-04 00:41:05.648144 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:41:05.648155 | orchestrator | Thursday 04 September 2025 00:41:02 +0000 (0:00:00.316) 0:00:20.243 **** 2025-09-04 00:41:05.648165 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-09-04 00:41:05.648185 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-09-04 00:41:05.648196 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-09-04 00:41:05.648206 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-09-04 00:41:05.648217 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-09-04 00:41:05.648228 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-09-04 00:41:05.648239 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-09-04 00:41:05.648250 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-09-04 00:41:05.648260 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-09-04 00:41:05.648271 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-09-04 00:41:05.648281 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-09-04 00:41:05.648292 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-09-04 00:41:05.648303 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-09-04 00:41:05.648313 | orchestrator | 2025-09-04 00:41:05.648324 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:41:05.648335 | orchestrator | Thursday 04 September 2025 00:41:02 +0000 (0:00:00.430) 0:00:20.673 **** 2025-09-04 00:41:05.648346 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:41:05.648356 | orchestrator | 2025-09-04 00:41:05.648367 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:41:05.648378 | orchestrator | Thursday 04 September 2025 00:41:02 +0000 (0:00:00.201) 0:00:20.874 **** 2025-09-04 00:41:05.648389 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:41:05.648399 | orchestrator | 2025-09-04 00:41:05.648416 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:41:05.648427 | orchestrator | Thursday 04 September 2025 00:41:03 +0000 (0:00:00.654) 0:00:21.528 **** 2025-09-04 00:41:05.648437 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:41:05.648448 | orchestrator | 2025-09-04 00:41:05.648459 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:41:05.648470 | orchestrator | Thursday 04 September 2025 00:41:03 +0000 (0:00:00.213) 0:00:21.742 **** 2025-09-04 00:41:05.648481 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:41:05.648491 | orchestrator | 2025-09-04 00:41:05.648502 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:41:05.648513 | orchestrator | Thursday 04 September 2025 00:41:03 +0000 (0:00:00.202) 0:00:21.944 **** 2025-09-04 00:41:05.648524 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:41:05.648535 | orchestrator | 2025-09-04 00:41:05.648545 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:41:05.648556 | orchestrator | Thursday 04 September 2025 00:41:04 +0000 (0:00:00.208) 0:00:22.153 **** 2025-09-04 00:41:05.648567 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:41:05.648577 | orchestrator | 2025-09-04 00:41:05.648588 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:41:05.648599 | orchestrator | Thursday 04 September 2025 00:41:04 +0000 (0:00:00.201) 0:00:22.354 **** 2025-09-04 00:41:05.648609 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:41:05.648620 | orchestrator | 2025-09-04 00:41:05.648631 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:41:05.648642 | orchestrator | Thursday 04 September 2025 00:41:04 +0000 (0:00:00.203) 0:00:22.558 **** 2025-09-04 00:41:05.648652 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:41:05.648663 | orchestrator | 2025-09-04 00:41:05.648674 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:41:05.648690 | orchestrator | Thursday 04 September 
2025 00:41:04 +0000 (0:00:00.283) 0:00:22.842 **** 2025-09-04 00:41:05.648701 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-09-04 00:41:05.648712 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-09-04 00:41:05.648723 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-09-04 00:41:05.648734 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-09-04 00:41:05.648745 | orchestrator | 2025-09-04 00:41:05.648756 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:41:05.648767 | orchestrator | Thursday 04 September 2025 00:41:05 +0000 (0:00:00.660) 0:00:23.502 **** 2025-09-04 00:41:05.648778 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:41:05.648789 | orchestrator | 2025-09-04 00:41:05.648806 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:41:12.126174 | orchestrator | Thursday 04 September 2025 00:41:05 +0000 (0:00:00.202) 0:00:23.705 **** 2025-09-04 00:41:12.126272 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:41:12.126288 | orchestrator | 2025-09-04 00:41:12.126300 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:41:12.126312 | orchestrator | Thursday 04 September 2025 00:41:05 +0000 (0:00:00.216) 0:00:23.922 **** 2025-09-04 00:41:12.126322 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:41:12.126333 | orchestrator | 2025-09-04 00:41:12.126344 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:41:12.126355 | orchestrator | Thursday 04 September 2025 00:41:06 +0000 (0:00:00.202) 0:00:24.124 **** 2025-09-04 00:41:12.126368 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:41:12.126387 | orchestrator | 2025-09-04 00:41:12.126405 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-04 00:41:12.126423 | orchestrator | Thursday 04 September 2025 00:41:06 +0000 (0:00:00.201) 0:00:24.326 **** 2025-09-04 00:41:12.126443 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-09-04 00:41:12.126463 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-09-04 00:41:12.126474 | orchestrator | 2025-09-04 00:41:12.126485 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-04 00:41:12.126496 | orchestrator | Thursday 04 September 2025 00:41:06 +0000 (0:00:00.368) 0:00:24.695 **** 2025-09-04 00:41:12.126506 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:41:12.126517 | orchestrator | 2025-09-04 00:41:12.126527 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-04 00:41:12.126539 | orchestrator | Thursday 04 September 2025 00:41:06 +0000 (0:00:00.121) 0:00:24.816 **** 2025-09-04 00:41:12.126550 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:41:12.126560 | orchestrator | 2025-09-04 00:41:12.126579 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-04 00:41:12.126597 | orchestrator | Thursday 04 September 2025 00:41:06 +0000 (0:00:00.129) 0:00:24.946 **** 2025-09-04 00:41:12.126615 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:41:12.126633 | orchestrator | 2025-09-04 00:41:12.126651 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-04 
00:41:12.126671 | orchestrator | Thursday 04 September 2025 00:41:07 +0000 (0:00:00.127) 0:00:25.074 **** 2025-09-04 00:41:12.126690 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:41:12.126711 | orchestrator | 2025-09-04 00:41:12.126730 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-04 00:41:12.126750 | orchestrator | Thursday 04 September 2025 00:41:07 +0000 (0:00:00.142) 0:00:25.216 **** 2025-09-04 00:41:12.126764 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '23145802-20b8-57b1-85d3-97c3024bf9a4'}}) 2025-09-04 00:41:12.126777 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f7bb41d6-3a07-594b-a5ed-81bcd3eca58b'}}) 2025-09-04 00:41:12.126790 | orchestrator | 2025-09-04 00:41:12.126802 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-04 00:41:12.126836 | orchestrator | Thursday 04 September 2025 00:41:07 +0000 (0:00:00.168) 0:00:25.384 **** 2025-09-04 00:41:12.126849 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '23145802-20b8-57b1-85d3-97c3024bf9a4'}})  2025-09-04 00:41:12.126862 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f7bb41d6-3a07-594b-a5ed-81bcd3eca58b'}})  2025-09-04 00:41:12.126875 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:41:12.126909 | orchestrator | 2025-09-04 00:41:12.126935 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-04 00:41:12.126947 | orchestrator | Thursday 04 September 2025 00:41:07 +0000 (0:00:00.149) 0:00:25.534 **** 2025-09-04 00:41:12.126958 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '23145802-20b8-57b1-85d3-97c3024bf9a4'}})  2025-09-04 00:41:12.126968 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f7bb41d6-3a07-594b-a5ed-81bcd3eca58b'}})  2025-09-04 00:41:12.126979 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:41:12.126990 | orchestrator | 2025-09-04 00:41:12.127000 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-04 00:41:12.127011 | orchestrator | Thursday 04 September 2025 00:41:07 +0000 (0:00:00.148) 0:00:25.682 **** 2025-09-04 00:41:12.127022 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '23145802-20b8-57b1-85d3-97c3024bf9a4'}})  2025-09-04 00:41:12.127033 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f7bb41d6-3a07-594b-a5ed-81bcd3eca58b'}})  2025-09-04 00:41:12.127044 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:41:12.127055 | orchestrator | 2025-09-04 00:41:12.127065 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-04 00:41:12.127076 | orchestrator | Thursday 04 September 2025 00:41:07 +0000 (0:00:00.135) 0:00:25.818 **** 2025-09-04 00:41:12.127087 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:41:12.127098 | orchestrator | 2025-09-04 00:41:12.127108 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-04 00:41:12.127118 | orchestrator | Thursday 04 September 2025 00:41:07 +0000 (0:00:00.151) 0:00:25.970 **** 2025-09-04 00:41:12.127129 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:41:12.127140 
| orchestrator | 2025-09-04 00:41:12.127150 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-04 00:41:12.127161 | orchestrator | Thursday 04 September 2025 00:41:08 +0000 (0:00:00.134) 0:00:26.104 **** 2025-09-04 00:41:12.127172 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:41:12.127182 | orchestrator | 2025-09-04 00:41:12.127211 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-04 00:41:12.127223 | orchestrator | Thursday 04 September 2025 00:41:08 +0000 (0:00:00.124) 0:00:26.229 **** 2025-09-04 00:41:12.127233 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:41:12.127244 | orchestrator | 2025-09-04 00:41:12.127255 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-09-04 00:41:12.127265 | orchestrator | Thursday 04 September 2025 00:41:08 +0000 (0:00:00.359) 0:00:26.588 **** 2025-09-04 00:41:12.127276 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:41:12.127287 | orchestrator | 2025-09-04 00:41:12.127297 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-04 00:41:12.127308 | orchestrator | Thursday 04 September 2025 00:41:08 +0000 (0:00:00.135) 0:00:26.724 **** 2025-09-04 00:41:12.127318 | orchestrator | ok: [testbed-node-4] => { 2025-09-04 00:41:12.127329 | orchestrator |  "ceph_osd_devices": { 2025-09-04 00:41:12.127340 | orchestrator |  "sdb": { 2025-09-04 00:41:12.127351 | orchestrator |  "osd_lvm_uuid": "23145802-20b8-57b1-85d3-97c3024bf9a4" 2025-09-04 00:41:12.127362 | orchestrator |  }, 2025-09-04 00:41:12.127372 | orchestrator |  "sdc": { 2025-09-04 00:41:12.127391 | orchestrator |  "osd_lvm_uuid": "f7bb41d6-3a07-594b-a5ed-81bcd3eca58b" 2025-09-04 00:41:12.127401 | orchestrator |  } 2025-09-04 00:41:12.127412 | orchestrator |  } 2025-09-04 00:41:12.127423 | orchestrator | } 2025-09-04 00:41:12.127433 | orchestrator | 2025-09-04 00:41:12.127444 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-04 00:41:12.127455 | orchestrator | Thursday 04 September 2025 00:41:08 +0000 (0:00:00.129) 0:00:26.854 **** 2025-09-04 00:41:12.127466 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:41:12.127476 | orchestrator | 2025-09-04 00:41:12.127495 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-04 00:41:12.127514 | orchestrator | Thursday 04 September 2025 00:41:08 +0000 (0:00:00.128) 0:00:26.983 **** 2025-09-04 00:41:12.127532 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:41:12.127551 | orchestrator | 2025-09-04 00:41:12.127570 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-09-04 00:41:12.127585 | orchestrator | Thursday 04 September 2025 00:41:09 +0000 (0:00:00.139) 0:00:27.122 **** 2025-09-04 00:41:12.127596 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:41:12.127606 | orchestrator | 2025-09-04 00:41:12.127617 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-04 00:41:12.127627 | orchestrator | Thursday 04 September 2025 00:41:09 +0000 (0:00:00.132) 0:00:27.255 **** 2025-09-04 00:41:12.127638 | orchestrator | changed: [testbed-node-4] => { 2025-09-04 00:41:12.127648 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-04 00:41:12.127659 | orchestrator |  "ceph_osd_devices": { 2025-09-04 
00:41:12.127670 | orchestrator |  "sdb": { 2025-09-04 00:41:12.127681 | orchestrator |  "osd_lvm_uuid": "23145802-20b8-57b1-85d3-97c3024bf9a4" 2025-09-04 00:41:12.127691 | orchestrator |  }, 2025-09-04 00:41:12.127702 | orchestrator |  "sdc": { 2025-09-04 00:41:12.127713 | orchestrator |  "osd_lvm_uuid": "f7bb41d6-3a07-594b-a5ed-81bcd3eca58b" 2025-09-04 00:41:12.127724 | orchestrator |  } 2025-09-04 00:41:12.127734 | orchestrator |  }, 2025-09-04 00:41:12.127745 | orchestrator |  "lvm_volumes": [ 2025-09-04 00:41:12.127755 | orchestrator |  { 2025-09-04 00:41:12.127766 | orchestrator |  "data": "osd-block-23145802-20b8-57b1-85d3-97c3024bf9a4", 2025-09-04 00:41:12.127777 | orchestrator |  "data_vg": "ceph-23145802-20b8-57b1-85d3-97c3024bf9a4" 2025-09-04 00:41:12.127787 | orchestrator |  }, 2025-09-04 00:41:12.127797 | orchestrator |  { 2025-09-04 00:41:12.127812 | orchestrator |  "data": "osd-block-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b", 2025-09-04 00:41:12.127832 | orchestrator |  "data_vg": "ceph-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b" 2025-09-04 00:41:12.127846 | orchestrator |  } 2025-09-04 00:41:12.127857 | orchestrator |  ] 2025-09-04 00:41:12.127869 | orchestrator |  } 2025-09-04 00:41:12.127919 | orchestrator | } 2025-09-04 00:41:12.127932 | orchestrator | 2025-09-04 00:41:12.127943 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-04 00:41:12.127954 | orchestrator | Thursday 04 September 2025 00:41:09 +0000 (0:00:00.204) 0:00:27.459 **** 2025-09-04 00:41:12.127964 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-04 00:41:12.127975 | orchestrator | 2025-09-04 00:41:12.127985 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-04 00:41:12.127996 | orchestrator | 2025-09-04 00:41:12.128006 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-04 00:41:12.128017 | orchestrator | Thursday 04 September 2025 00:41:10 +0000 (0:00:01.094) 0:00:28.554 **** 2025-09-04 00:41:12.128027 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-04 00:41:12.128038 | orchestrator | 2025-09-04 00:41:12.128048 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-04 00:41:12.128059 | orchestrator | Thursday 04 September 2025 00:41:11 +0000 (0:00:00.520) 0:00:29.074 **** 2025-09-04 00:41:12.128078 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:41:12.128089 | orchestrator | 2025-09-04 00:41:12.128106 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:41:12.128117 | orchestrator | Thursday 04 September 2025 00:41:11 +0000 (0:00:00.731) 0:00:29.806 **** 2025-09-04 00:41:12.128128 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-09-04 00:41:12.128139 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-09-04 00:41:12.128149 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-09-04 00:41:12.128159 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-09-04 00:41:12.128170 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-04 00:41:12.128180 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop5) 2025-09-04 00:41:12.128199 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-04 00:41:19.991803 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-04 00:41:19.991877 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-04 00:41:19.991917 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-04 00:41:19.991926 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-04 00:41:19.991934 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-04 00:41:19.991941 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-04 00:41:19.991950 | orchestrator | 2025-09-04 00:41:19.991958 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:41:19.991967 | orchestrator | Thursday 04 September 2025 00:41:12 +0000 (0:00:00.374) 0:00:30.180 **** 2025-09-04 00:41:19.991974 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:41:19.991983 | orchestrator | 2025-09-04 00:41:19.991991 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:41:19.991998 | orchestrator | Thursday 04 September 2025 00:41:12 +0000 (0:00:00.207) 0:00:30.387 **** 2025-09-04 00:41:19.992006 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:41:19.992014 | orchestrator | 2025-09-04 00:41:19.992021 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:41:19.992029 | orchestrator | Thursday 04 September 2025 00:41:12 +0000 (0:00:00.259) 0:00:30.646 **** 2025-09-04 00:41:19.992037 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:41:19.992044 | orchestrator | 2025-09-04 00:41:19.992052 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:41:19.992060 | orchestrator | Thursday 04 September 2025 00:41:12 +0000 (0:00:00.189) 0:00:30.836 **** 2025-09-04 00:41:19.992068 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:41:19.992075 | orchestrator | 2025-09-04 00:41:19.992083 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:41:19.992091 | orchestrator | Thursday 04 September 2025 00:41:12 +0000 (0:00:00.193) 0:00:31.030 **** 2025-09-04 00:41:19.992098 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:41:19.992106 | orchestrator | 2025-09-04 00:41:19.992114 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:41:19.992121 | orchestrator | Thursday 04 September 2025 00:41:13 +0000 (0:00:00.234) 0:00:31.264 **** 2025-09-04 00:41:19.992129 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:41:19.992137 | orchestrator | 2025-09-04 00:41:19.992144 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:41:19.992152 | orchestrator | Thursday 04 September 2025 00:41:13 +0000 (0:00:00.192) 0:00:31.457 **** 2025-09-04 00:41:19.992160 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:41:19.992188 | orchestrator | 2025-09-04 00:41:19.992196 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2025-09-04 00:41:19.992204 | orchestrator | Thursday 04 September 2025 00:41:13 +0000 (0:00:00.236) 0:00:31.694 **** 2025-09-04 00:41:19.992211 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:41:19.992219 | orchestrator | 2025-09-04 00:41:19.992227 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:41:19.992235 | orchestrator | Thursday 04 September 2025 00:41:13 +0000 (0:00:00.214) 0:00:31.908 **** 2025-09-04 00:41:19.992243 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb) 2025-09-04 00:41:19.992251 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb) 2025-09-04 00:41:19.992259 | orchestrator | 2025-09-04 00:41:19.992267 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:41:19.992275 | orchestrator | Thursday 04 September 2025 00:41:14 +0000 (0:00:00.650) 0:00:32.559 **** 2025-09-04 00:41:19.992282 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d9ebab6f-eb3e-4afd-8760-ef197f6a89e1) 2025-09-04 00:41:19.992290 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d9ebab6f-eb3e-4afd-8760-ef197f6a89e1) 2025-09-04 00:41:19.992298 | orchestrator | 2025-09-04 00:41:19.992306 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:41:19.992314 | orchestrator | Thursday 04 September 2025 00:41:15 +0000 (0:00:00.677) 0:00:33.237 **** 2025-09-04 00:41:19.992321 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_bcfba7c6-2fb5-4d57-9e19-86dc2ee83b3a) 2025-09-04 00:41:19.992329 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_bcfba7c6-2fb5-4d57-9e19-86dc2ee83b3a) 2025-09-04 00:41:19.992337 | orchestrator | 2025-09-04 00:41:19.992345 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:41:19.992352 | orchestrator | Thursday 04 September 2025 00:41:15 +0000 (0:00:00.340) 0:00:33.577 **** 2025-09-04 00:41:19.992360 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a760aba3-b640-428b-b8ab-7cd9fc53fd0e) 2025-09-04 00:41:19.992368 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a760aba3-b640-428b-b8ab-7cd9fc53fd0e) 2025-09-04 00:41:19.992378 | orchestrator | 2025-09-04 00:41:19.992387 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:41:19.992396 | orchestrator | Thursday 04 September 2025 00:41:15 +0000 (0:00:00.410) 0:00:33.988 **** 2025-09-04 00:41:19.992406 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-04 00:41:19.992415 | orchestrator | 2025-09-04 00:41:19.992424 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:41:19.992434 | orchestrator | Thursday 04 September 2025 00:41:16 +0000 (0:00:00.310) 0:00:34.299 **** 2025-09-04 00:41:19.992455 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-04 00:41:19.992465 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-04 00:41:19.992474 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-04 00:41:19.992483 | orchestrator 
| included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-04 00:41:19.992492 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-04 00:41:19.992502 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-04 00:41:19.992522 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-04 00:41:19.992531 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-04 00:41:19.992540 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-04 00:41:19.992554 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-04 00:41:19.992564 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-04 00:41:19.992573 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-04 00:41:19.992581 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-04 00:41:19.992591 | orchestrator | 2025-09-04 00:41:19.992600 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:41:19.992608 | orchestrator | Thursday 04 September 2025 00:41:16 +0000 (0:00:00.363) 0:00:34.663 **** 2025-09-04 00:41:19.992618 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:41:19.992626 | orchestrator | 2025-09-04 00:41:19.992636 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:41:19.992644 | orchestrator | Thursday 04 September 2025 00:41:16 +0000 (0:00:00.200) 0:00:34.863 **** 2025-09-04 00:41:19.992653 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:41:19.992662 | orchestrator | 2025-09-04 00:41:19.992671 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:41:19.992680 | orchestrator | Thursday 04 September 2025 00:41:16 +0000 (0:00:00.185) 0:00:35.049 **** 2025-09-04 00:41:19.992689 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:41:19.992698 | orchestrator | 2025-09-04 00:41:19.992710 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:41:19.992719 | orchestrator | Thursday 04 September 2025 00:41:17 +0000 (0:00:00.190) 0:00:35.239 **** 2025-09-04 00:41:19.992728 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:41:19.992736 | orchestrator | 2025-09-04 00:41:19.992744 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:41:19.992752 | orchestrator | Thursday 04 September 2025 00:41:17 +0000 (0:00:00.186) 0:00:35.425 **** 2025-09-04 00:41:19.992759 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:41:19.992767 | orchestrator | 2025-09-04 00:41:19.992775 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:41:19.992782 | orchestrator | Thursday 04 September 2025 00:41:17 +0000 (0:00:00.188) 0:00:35.614 **** 2025-09-04 00:41:19.992790 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:41:19.992798 | orchestrator | 2025-09-04 00:41:19.992805 | orchestrator | TASK [Add known partitions to the list of available block devices] 
************* 2025-09-04 00:41:19.992813 | orchestrator | Thursday 04 September 2025 00:41:18 +0000 (0:00:00.523) 0:00:36.138 **** 2025-09-04 00:41:19.992821 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:41:19.992829 | orchestrator | 2025-09-04 00:41:19.992836 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:41:19.992844 | orchestrator | Thursday 04 September 2025 00:41:18 +0000 (0:00:00.188) 0:00:36.326 **** 2025-09-04 00:41:19.992852 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:41:19.992859 | orchestrator | 2025-09-04 00:41:19.992867 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:41:19.992875 | orchestrator | Thursday 04 September 2025 00:41:18 +0000 (0:00:00.183) 0:00:36.509 **** 2025-09-04 00:41:19.992896 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-09-04 00:41:19.992905 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-09-04 00:41:19.992913 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-04 00:41:19.992921 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-04 00:41:19.992928 | orchestrator | 2025-09-04 00:41:19.992936 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:41:19.992944 | orchestrator | Thursday 04 September 2025 00:41:19 +0000 (0:00:00.675) 0:00:37.185 **** 2025-09-04 00:41:19.992952 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:41:19.992960 | orchestrator | 2025-09-04 00:41:19.992968 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:41:19.992981 | orchestrator | Thursday 04 September 2025 00:41:19 +0000 (0:00:00.208) 0:00:37.393 **** 2025-09-04 00:41:19.992989 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:41:19.992997 | orchestrator | 2025-09-04 00:41:19.993005 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:41:19.993013 | orchestrator | Thursday 04 September 2025 00:41:19 +0000 (0:00:00.207) 0:00:37.601 **** 2025-09-04 00:41:19.993020 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:41:19.993028 | orchestrator | 2025-09-04 00:41:19.993036 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:41:19.993044 | orchestrator | Thursday 04 September 2025 00:41:19 +0000 (0:00:00.229) 0:00:37.830 **** 2025-09-04 00:41:19.993051 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:41:19.993059 | orchestrator | 2025-09-04 00:41:19.993067 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-04 00:41:19.993079 | orchestrator | Thursday 04 September 2025 00:41:19 +0000 (0:00:00.220) 0:00:38.051 **** 2025-09-04 00:41:24.722983 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-09-04 00:41:24.723071 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-09-04 00:41:24.723086 | orchestrator | 2025-09-04 00:41:24.723098 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-04 00:41:24.723109 | orchestrator | Thursday 04 September 2025 00:41:20 +0000 (0:00:00.166) 0:00:38.217 **** 2025-09-04 00:41:24.723120 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:41:24.723131 | orchestrator | 2025-09-04 00:41:24.723143 | orchestrator | TASK [Generate DB 
VG names] **************************************************** 2025-09-04 00:41:24.723154 | orchestrator | Thursday 04 September 2025 00:41:20 +0000 (0:00:00.144) 0:00:38.362 **** 2025-09-04 00:41:24.723164 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:41:24.723175 | orchestrator | 2025-09-04 00:41:24.723185 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-04 00:41:24.723196 | orchestrator | Thursday 04 September 2025 00:41:20 +0000 (0:00:00.260) 0:00:38.622 **** 2025-09-04 00:41:24.723207 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:41:24.723217 | orchestrator | 2025-09-04 00:41:24.723228 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-04 00:41:24.723239 | orchestrator | Thursday 04 September 2025 00:41:20 +0000 (0:00:00.197) 0:00:38.820 **** 2025-09-04 00:41:24.723249 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:41:24.723261 | orchestrator | 2025-09-04 00:41:24.723271 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-04 00:41:24.723282 | orchestrator | Thursday 04 September 2025 00:41:21 +0000 (0:00:00.375) 0:00:39.196 **** 2025-09-04 00:41:24.723294 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4210a630-ee04-544c-a974-f1a5ed1af07b'}}) 2025-09-04 00:41:24.723305 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f811dc1d-ef4c-5806-9269-19449158751d'}}) 2025-09-04 00:41:24.723316 | orchestrator | 2025-09-04 00:41:24.723341 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-04 00:41:24.723352 | orchestrator | Thursday 04 September 2025 00:41:21 +0000 (0:00:00.180) 0:00:39.377 **** 2025-09-04 00:41:24.723363 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4210a630-ee04-544c-a974-f1a5ed1af07b'}})  2025-09-04 00:41:24.723383 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f811dc1d-ef4c-5806-9269-19449158751d'}})  2025-09-04 00:41:24.723394 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:41:24.723405 | orchestrator | 2025-09-04 00:41:24.723416 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-04 00:41:24.723427 | orchestrator | Thursday 04 September 2025 00:41:21 +0000 (0:00:00.183) 0:00:39.560 **** 2025-09-04 00:41:24.723437 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4210a630-ee04-544c-a974-f1a5ed1af07b'}})  2025-09-04 00:41:24.723469 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f811dc1d-ef4c-5806-9269-19449158751d'}})  2025-09-04 00:41:24.723480 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:41:24.723493 | orchestrator | 2025-09-04 00:41:24.723506 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-04 00:41:24.723519 | orchestrator | Thursday 04 September 2025 00:41:21 +0000 (0:00:00.174) 0:00:39.735 **** 2025-09-04 00:41:24.723531 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4210a630-ee04-544c-a974-f1a5ed1af07b'}})  2025-09-04 00:41:24.723559 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f811dc1d-ef4c-5806-9269-19449158751d'}})  2025-09-04 
00:41:24.723572 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:41:24.723585 | orchestrator | 2025-09-04 00:41:24.723597 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-04 00:41:24.723610 | orchestrator | Thursday 04 September 2025 00:41:21 +0000 (0:00:00.199) 0:00:39.935 **** 2025-09-04 00:41:24.723623 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:41:24.723635 | orchestrator | 2025-09-04 00:41:24.723648 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-04 00:41:24.723661 | orchestrator | Thursday 04 September 2025 00:41:22 +0000 (0:00:00.210) 0:00:40.145 **** 2025-09-04 00:41:24.723673 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:41:24.723686 | orchestrator | 2025-09-04 00:41:24.723698 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-04 00:41:24.723712 | orchestrator | Thursday 04 September 2025 00:41:22 +0000 (0:00:00.130) 0:00:40.275 **** 2025-09-04 00:41:24.723725 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:41:24.723737 | orchestrator | 2025-09-04 00:41:24.723749 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-04 00:41:24.723762 | orchestrator | Thursday 04 September 2025 00:41:22 +0000 (0:00:00.128) 0:00:40.404 **** 2025-09-04 00:41:24.723774 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:41:24.723786 | orchestrator | 2025-09-04 00:41:24.723799 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-09-04 00:41:24.723811 | orchestrator | Thursday 04 September 2025 00:41:22 +0000 (0:00:00.124) 0:00:40.528 **** 2025-09-04 00:41:24.723824 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:41:24.723836 | orchestrator | 2025-09-04 00:41:24.723849 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-04 00:41:24.723860 | orchestrator | Thursday 04 September 2025 00:41:22 +0000 (0:00:00.123) 0:00:40.652 **** 2025-09-04 00:41:24.723871 | orchestrator | ok: [testbed-node-5] => { 2025-09-04 00:41:24.723899 | orchestrator |  "ceph_osd_devices": { 2025-09-04 00:41:24.723911 | orchestrator |  "sdb": { 2025-09-04 00:41:24.723922 | orchestrator |  "osd_lvm_uuid": "4210a630-ee04-544c-a974-f1a5ed1af07b" 2025-09-04 00:41:24.723949 | orchestrator |  }, 2025-09-04 00:41:24.723961 | orchestrator |  "sdc": { 2025-09-04 00:41:24.723972 | orchestrator |  "osd_lvm_uuid": "f811dc1d-ef4c-5806-9269-19449158751d" 2025-09-04 00:41:24.723983 | orchestrator |  } 2025-09-04 00:41:24.723993 | orchestrator |  } 2025-09-04 00:41:24.724004 | orchestrator | } 2025-09-04 00:41:24.724016 | orchestrator | 2025-09-04 00:41:24.724026 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-04 00:41:24.724037 | orchestrator | Thursday 04 September 2025 00:41:22 +0000 (0:00:00.136) 0:00:40.788 **** 2025-09-04 00:41:24.724048 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:41:24.724059 | orchestrator | 2025-09-04 00:41:24.724069 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-04 00:41:24.724080 | orchestrator | Thursday 04 September 2025 00:41:22 +0000 (0:00:00.128) 0:00:40.917 **** 2025-09-04 00:41:24.724091 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:41:24.724101 | orchestrator | 2025-09-04 00:41:24.724112 | orchestrator | 
TASK [Print shared DB/WAL devices] ********************************************* 2025-09-04 00:41:24.724131 | orchestrator | Thursday 04 September 2025 00:41:23 +0000 (0:00:00.338) 0:00:41.255 **** 2025-09-04 00:41:24.724141 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:41:24.724152 | orchestrator | 2025-09-04 00:41:24.724163 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-04 00:41:24.724173 | orchestrator | Thursday 04 September 2025 00:41:23 +0000 (0:00:00.134) 0:00:41.390 **** 2025-09-04 00:41:24.724184 | orchestrator | changed: [testbed-node-5] => { 2025-09-04 00:41:24.724195 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-04 00:41:24.724206 | orchestrator |  "ceph_osd_devices": { 2025-09-04 00:41:24.724217 | orchestrator |  "sdb": { 2025-09-04 00:41:24.724227 | orchestrator |  "osd_lvm_uuid": "4210a630-ee04-544c-a974-f1a5ed1af07b" 2025-09-04 00:41:24.724238 | orchestrator |  }, 2025-09-04 00:41:24.724249 | orchestrator |  "sdc": { 2025-09-04 00:41:24.724260 | orchestrator |  "osd_lvm_uuid": "f811dc1d-ef4c-5806-9269-19449158751d" 2025-09-04 00:41:24.724270 | orchestrator |  } 2025-09-04 00:41:24.724281 | orchestrator |  }, 2025-09-04 00:41:24.724292 | orchestrator |  "lvm_volumes": [ 2025-09-04 00:41:24.724303 | orchestrator |  { 2025-09-04 00:41:24.724313 | orchestrator |  "data": "osd-block-4210a630-ee04-544c-a974-f1a5ed1af07b", 2025-09-04 00:41:24.724324 | orchestrator |  "data_vg": "ceph-4210a630-ee04-544c-a974-f1a5ed1af07b" 2025-09-04 00:41:24.724335 | orchestrator |  }, 2025-09-04 00:41:24.724346 | orchestrator |  { 2025-09-04 00:41:24.724356 | orchestrator |  "data": "osd-block-f811dc1d-ef4c-5806-9269-19449158751d", 2025-09-04 00:41:24.724367 | orchestrator |  "data_vg": "ceph-f811dc1d-ef4c-5806-9269-19449158751d" 2025-09-04 00:41:24.724378 | orchestrator |  } 2025-09-04 00:41:24.724389 | orchestrator |  ] 2025-09-04 00:41:24.724400 | orchestrator |  } 2025-09-04 00:41:24.724414 | orchestrator | } 2025-09-04 00:41:24.724425 | orchestrator | 2025-09-04 00:41:24.724436 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-04 00:41:24.724447 | orchestrator | Thursday 04 September 2025 00:41:23 +0000 (0:00:00.219) 0:00:41.609 **** 2025-09-04 00:41:24.724457 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-04 00:41:24.724468 | orchestrator | 2025-09-04 00:41:24.724479 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:41:24.724490 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-04 00:41:24.724502 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-04 00:41:24.724513 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-04 00:41:24.724524 | orchestrator | 2025-09-04 00:41:24.724535 | orchestrator | 2025-09-04 00:41:24.724545 | orchestrator | 2025-09-04 00:41:24.724556 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 00:41:24.724567 | orchestrator | Thursday 04 September 2025 00:41:24 +0000 (0:00:01.157) 0:00:42.767 **** 2025-09-04 00:41:24.724577 | orchestrator | =============================================================================== 2025-09-04 00:41:24.724588 | orchestrator | Write configuration file 
------------------------------------------------ 4.48s 2025-09-04 00:41:24.724599 | orchestrator | Get initial list of available block devices ----------------------------- 1.20s 2025-09-04 00:41:24.724609 | orchestrator | Add known partitions to the list of available block devices ------------- 1.18s 2025-09-04 00:41:24.724620 | orchestrator | Add known links to the list of available block devices ------------------ 1.13s 2025-09-04 00:41:24.724631 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.07s 2025-09-04 00:41:24.724649 | orchestrator | Add known partitions to the list of available block devices ------------- 1.03s 2025-09-04 00:41:24.724659 | orchestrator | Add known links to the list of available block devices ------------------ 0.73s 2025-09-04 00:41:24.724670 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.71s 2025-09-04 00:41:24.724681 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s 2025-09-04 00:41:24.724691 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s 2025-09-04 00:41:24.724702 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.67s 2025-09-04 00:41:24.724713 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.66s 2025-09-04 00:41:24.724724 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s 2025-09-04 00:41:24.724734 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s 2025-09-04 00:41:24.724752 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s 2025-09-04 00:41:24.973949 | orchestrator | Print configuration data ------------------------------------------------ 0.64s 2025-09-04 00:41:24.974150 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s 2025-09-04 00:41:24.974178 | orchestrator | Set WAL devices config data --------------------------------------------- 0.62s 2025-09-04 00:41:24.974190 | orchestrator | Print DB devices -------------------------------------------------------- 0.61s 2025-09-04 00:41:24.974201 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s 2025-09-04 00:41:47.417253 | orchestrator | 2025-09-04 00:41:47 | INFO  | Task 4a2fef06-2be0-4aba-a5a7-0dd314773a69 (sync inventory) is running in background. Output coming soon. 
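
Editor's note: the three "Ceph configure LVM" plays above all emit the same shape of data per node: a `ceph_osd_devices` map whose entries carry an `osd_lvm_uuid`, and a compiled `lvm_volumes` list that names the data LV `osd-block-<uuid>` inside the VG `ceph-<uuid>`. The sketch below mirrors that mapping only; the UUIDs in the log look name-based (version 5), but the actual namespace/name inputs used by the playbook are not visible here, so the `uuid5` derivation over hostname and device is an assumption for illustration.

```python
# Illustrative sketch only -- not the OSISM implementation. It reproduces the
# mapping visible in the log: each ceph_osd_devices entry has an osd_lvm_uuid,
# and the compiled lvm_volumes list points at LV "osd-block-<uuid>" in VG
# "ceph-<uuid>". The uuid5 inputs are assumptions (hostname + device name).
import uuid


def osd_lvm_uuid(hostname: str, device: str) -> str:
    # Hypothetical derivation of a deterministic, name-based UUID per device.
    return str(uuid.uuid5(uuid.NAMESPACE_DNS, f"{hostname}-{device}"))


def compile_lvm_volumes(ceph_osd_devices: dict) -> list[dict]:
    # "Block only" case from the log: no separate DB or WAL device, so each
    # entry carries just the data LV and its volume group.
    return [
        {
            "data": f"osd-block-{spec['osd_lvm_uuid']}",
            "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
        }
        for spec in ceph_osd_devices.values()
    ]


if __name__ == "__main__":
    devices = {
        dev: {"osd_lvm_uuid": osd_lvm_uuid("testbed-node-3", dev)}
        for dev in ("sdb", "sdc")
    }
    for volume in compile_lvm_volumes(devices):
        print(volume)
```

This is why the handler only has to write one small YAML fragment per node: everything in `lvm_volumes` is derivable from the per-device UUIDs.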
2025-09-04 00:42:14.478807 | orchestrator | 2025-09-04 00:41:48 | INFO  | Starting group_vars file reorganization 2025-09-04 00:42:14.478977 | orchestrator | 2025-09-04 00:41:48 | INFO  | Moved 0 file(s) to their respective directories 2025-09-04 00:42:14.478997 | orchestrator | 2025-09-04 00:41:48 | INFO  | Group_vars file reorganization completed 2025-09-04 00:42:14.479009 | orchestrator | 2025-09-04 00:41:51 | INFO  | Starting variable preparation from inventory 2025-09-04 00:42:14.479021 | orchestrator | 2025-09-04 00:41:55 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-09-04 00:42:14.479032 | orchestrator | 2025-09-04 00:41:55 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-09-04 00:42:14.479043 | orchestrator | 2025-09-04 00:41:55 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-09-04 00:42:14.479073 | orchestrator | 2025-09-04 00:41:55 | INFO  | 3 file(s) written, 6 host(s) processed 2025-09-04 00:42:14.479085 | orchestrator | 2025-09-04 00:41:55 | INFO  | Variable preparation completed 2025-09-04 00:42:14.479096 | orchestrator | 2025-09-04 00:41:56 | INFO  | Starting inventory overwrite handling 2025-09-04 00:42:14.479107 | orchestrator | 2025-09-04 00:41:56 | INFO  | Handling group overwrites in 99-overwrite 2025-09-04 00:42:14.479123 | orchestrator | 2025-09-04 00:41:56 | INFO  | Removing group frr:children from 60-generic 2025-09-04 00:42:14.479134 | orchestrator | 2025-09-04 00:41:56 | INFO  | Removing group storage:children from 50-kolla 2025-09-04 00:42:14.479145 | orchestrator | 2025-09-04 00:41:56 | INFO  | Removing group netbird:children from 50-infrastruture 2025-09-04 00:42:14.479155 | orchestrator | 2025-09-04 00:41:56 | INFO  | Removing group ceph-rgw from 50-ceph 2025-09-04 00:42:14.479167 | orchestrator | 2025-09-04 00:41:56 | INFO  | Removing group ceph-mds from 50-ceph 2025-09-04 00:42:14.479177 | orchestrator | 2025-09-04 00:41:56 | INFO  | Handling group overwrites in 20-roles 2025-09-04 00:42:14.479188 | orchestrator | 2025-09-04 00:41:56 | INFO  | Removing group k3s_node from 50-infrastruture 2025-09-04 00:42:14.479226 | orchestrator | 2025-09-04 00:41:56 | INFO  | Removed 6 group(s) in total 2025-09-04 00:42:14.479237 | orchestrator | 2025-09-04 00:41:56 | INFO  | Inventory overwrite handling completed 2025-09-04 00:42:14.479247 | orchestrator | 2025-09-04 00:41:57 | INFO  | Starting merge of inventory files 2025-09-04 00:42:14.479258 | orchestrator | 2025-09-04 00:41:57 | INFO  | Inventory files merged successfully 2025-09-04 00:42:14.479268 | orchestrator | 2025-09-04 00:42:02 | INFO  | Generating ClusterShell configuration from Ansible inventory 2025-09-04 00:42:14.479279 | orchestrator | 2025-09-04 00:42:13 | INFO  | Successfully wrote ClusterShell configuration 2025-09-04 00:42:14.479289 | orchestrator | [master 5d18d62] 2025-09-04-00-42 2025-09-04 00:42:14.479301 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2025-09-04 00:42:16.419385 | orchestrator | 2025-09-04 00:42:16 | INFO  | Task 6d0dc1e7-1c42-4fbb-bd4f-3475a5ce9201 (ceph-create-lvm-devices) was prepared for execution. 2025-09-04 00:42:16.419483 | orchestrator | 2025-09-04 00:42:16 | INFO  | It takes a moment until task 6d0dc1e7-1c42-4fbb-bd4f-3475a5ce9201 (ceph-create-lvm-devices) has been started and output is visible here. 
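
Editor's note: the play whose output starts below (`ceph-create-lvm-devices`) consumes the `lvm_volumes` structures written above. Its task logic is not shown in this log; the following sketch assumes it drives standard LVM tooling and shows what one PV/VG/LV triple per configured device would look like. The device paths and the `run()` helper are hypothetical.

```python
# Illustrative sketch only -- the actual ceph-create-lvm-devices tasks are not
# visible in this log. Under the assumption of standard LVM tooling, each
# ceph_osd_devices entry becomes one PV/VG/LV triple matching the printed
# lvm_volumes structure (VG "ceph-<uuid>", LV "osd-block-<uuid>").
import subprocess


def run(cmd: list[str]) -> None:
    # Print and execute a single LVM command; raises on a non-zero exit code.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


def create_lvm_devices(ceph_osd_devices: dict) -> None:
    for device, spec in ceph_osd_devices.items():
        osd_uuid = spec["osd_lvm_uuid"]
        dev_path = f"/dev/{device}"          # assumption: plain /dev node, no by-id link
        vg_name = f"ceph-{osd_uuid}"
        lv_name = f"osd-block-{osd_uuid}"
        run(["pvcreate", dev_path])
        run(["vgcreate", vg_name, dev_path])
        # Give the whole VG to the block LV ("block only" layout, no DB/WAL).
        run(["lvcreate", "-l", "100%FREE", "-n", lv_name, vg_name])


if __name__ == "__main__":
    create_lvm_devices({
        "sdb": {"osd_lvm_uuid": "e3ae8186-8e2b-5882-a4eb-fc7766e33302"},
        "sdc": {"osd_lvm_uuid": "7cbfcabc-5503-586a-bc65-b738adbafdd0"},
    })
```

The play output that follows repeats the same device-discovery steps (by-id links, partitions) before acting on the devices, which is why the task names below look identical to the configuration pass above.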
2025-09-04 00:42:28.586474 | orchestrator | 2025-09-04 00:42:28.586592 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-04 00:42:28.586610 | orchestrator | 2025-09-04 00:42:28.586623 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-04 00:42:28.586635 | orchestrator | Thursday 04 September 2025 00:42:20 +0000 (0:00:00.322) 0:00:00.322 **** 2025-09-04 00:42:28.586646 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-04 00:42:28.586658 | orchestrator | 2025-09-04 00:42:28.586670 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-04 00:42:28.586681 | orchestrator | Thursday 04 September 2025 00:42:20 +0000 (0:00:00.231) 0:00:00.553 **** 2025-09-04 00:42:28.586692 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:42:28.586704 | orchestrator | 2025-09-04 00:42:28.586715 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:42:28.586726 | orchestrator | Thursday 04 September 2025 00:42:21 +0000 (0:00:00.243) 0:00:00.796 **** 2025-09-04 00:42:28.586737 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-09-04 00:42:28.586750 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-09-04 00:42:28.586761 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-09-04 00:42:28.586771 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-09-04 00:42:28.586782 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-09-04 00:42:28.586793 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-09-04 00:42:28.586803 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-09-04 00:42:28.586814 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-09-04 00:42:28.586825 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-09-04 00:42:28.586836 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-09-04 00:42:28.586846 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-09-04 00:42:28.586918 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-09-04 00:42:28.586932 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-09-04 00:42:28.586943 | orchestrator | 2025-09-04 00:42:28.586954 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:42:28.586993 | orchestrator | Thursday 04 September 2025 00:42:21 +0000 (0:00:00.425) 0:00:01.221 **** 2025-09-04 00:42:28.587007 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:28.587019 | orchestrator | 2025-09-04 00:42:28.587032 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:42:28.587044 | orchestrator | Thursday 04 September 2025 00:42:22 +0000 (0:00:00.497) 0:00:01.718 **** 2025-09-04 00:42:28.587057 | orchestrator | skipping: [testbed-node-3] 2025-09-04 
00:42:28.587070 | orchestrator | 2025-09-04 00:42:28.587082 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:42:28.587094 | orchestrator | Thursday 04 September 2025 00:42:22 +0000 (0:00:00.206) 0:00:01.925 **** 2025-09-04 00:42:28.587106 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:28.587118 | orchestrator | 2025-09-04 00:42:28.587131 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:42:28.587143 | orchestrator | Thursday 04 September 2025 00:42:22 +0000 (0:00:00.193) 0:00:02.119 **** 2025-09-04 00:42:28.587156 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:28.587168 | orchestrator | 2025-09-04 00:42:28.587180 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:42:28.587192 | orchestrator | Thursday 04 September 2025 00:42:22 +0000 (0:00:00.248) 0:00:02.368 **** 2025-09-04 00:42:28.587204 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:28.587217 | orchestrator | 2025-09-04 00:42:28.587230 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:42:28.587243 | orchestrator | Thursday 04 September 2025 00:42:23 +0000 (0:00:00.323) 0:00:02.692 **** 2025-09-04 00:42:28.587255 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:28.587267 | orchestrator | 2025-09-04 00:42:28.587280 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:42:28.587292 | orchestrator | Thursday 04 September 2025 00:42:23 +0000 (0:00:00.273) 0:00:02.966 **** 2025-09-04 00:42:28.587304 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:28.587317 | orchestrator | 2025-09-04 00:42:28.587329 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:42:28.587342 | orchestrator | Thursday 04 September 2025 00:42:23 +0000 (0:00:00.207) 0:00:03.173 **** 2025-09-04 00:42:28.587354 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:28.587364 | orchestrator | 2025-09-04 00:42:28.587375 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:42:28.587386 | orchestrator | Thursday 04 September 2025 00:42:23 +0000 (0:00:00.212) 0:00:03.386 **** 2025-09-04 00:42:28.587396 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472) 2025-09-04 00:42:28.587408 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472) 2025-09-04 00:42:28.587419 | orchestrator | 2025-09-04 00:42:28.587430 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:42:28.587440 | orchestrator | Thursday 04 September 2025 00:42:24 +0000 (0:00:00.404) 0:00:03.791 **** 2025-09-04 00:42:28.587471 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7a7dbb75-788e-43b1-b83f-869b645534fa) 2025-09-04 00:42:28.587483 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7a7dbb75-788e-43b1-b83f-869b645534fa) 2025-09-04 00:42:28.587494 | orchestrator | 2025-09-04 00:42:28.587505 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:42:28.587516 | orchestrator | Thursday 04 September 2025 00:42:24 +0000 (0:00:00.425) 0:00:04.216 **** 2025-09-04 
00:42:28.587526 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9f385429-6afa-4c66-a502-4318c5de8bbc) 2025-09-04 00:42:28.587537 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9f385429-6afa-4c66-a502-4318c5de8bbc) 2025-09-04 00:42:28.587548 | orchestrator | 2025-09-04 00:42:28.587559 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:42:28.587581 | orchestrator | Thursday 04 September 2025 00:42:25 +0000 (0:00:00.629) 0:00:04.846 **** 2025-09-04 00:42:28.587592 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_061ec2a1-a81e-446f-a16e-21a91d07e637) 2025-09-04 00:42:28.587603 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_061ec2a1-a81e-446f-a16e-21a91d07e637) 2025-09-04 00:42:28.587614 | orchestrator | 2025-09-04 00:42:28.587624 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:42:28.587635 | orchestrator | Thursday 04 September 2025 00:42:26 +0000 (0:00:00.871) 0:00:05.717 **** 2025-09-04 00:42:28.587645 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-04 00:42:28.587656 | orchestrator | 2025-09-04 00:42:28.587667 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:42:28.587677 | orchestrator | Thursday 04 September 2025 00:42:26 +0000 (0:00:00.323) 0:00:06.041 **** 2025-09-04 00:42:28.587688 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-09-04 00:42:28.587698 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-09-04 00:42:28.587709 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-09-04 00:42:28.587720 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-09-04 00:42:28.587748 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-09-04 00:42:28.587760 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-09-04 00:42:28.587770 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-09-04 00:42:28.587781 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-09-04 00:42:28.587791 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-09-04 00:42:28.587802 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-09-04 00:42:28.587812 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-09-04 00:42:28.587823 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-09-04 00:42:28.587838 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-09-04 00:42:28.587850 | orchestrator | 2025-09-04 00:42:28.587889 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:42:28.587901 | orchestrator | Thursday 04 September 2025 00:42:26 +0000 (0:00:00.421) 0:00:06.463 **** 2025-09-04 00:42:28.587912 | orchestrator | skipping: [testbed-node-3] 
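The repeated "Add known links/partitions" tasks above walk every device reported by the node (loop0..loop7, sda..sdd, sr0), include /ansible/tasks/_add-device-links.yml and _add-device-partitions.yml once per device, and append matching /dev/disk/by-id names (scsi-0QEMU_..., ata-QEMU_DVD-ROM_QM00001) and partitions (sda1, sda14, ...) to the list of available block devices. A minimal sketch of how such a per-device link lookup could be expressed; this is illustrative only and not the content of the actual included files, which this log does not show, and the variable names device and available_block_devices are hypothetical:

    # illustrative per-device lookup, not the real _add-device-links.yml
    - name: Resolve all by-id links on the node
      ansible.builtin.command: find /dev/disk/by-id -type l -printf '%f %l\n'
      register: _by_id_links
      changed_when: false

    - name: Add links that point at the current device to the list
      ansible.builtin.set_fact:
        available_block_devices: "{{ available_block_devices + [item.split(' ') | first] }}"
      loop: "{{ _by_id_links.stdout_lines }}"
      when: (item.split(' ') | last | basename) == device   # device: loop item such as 'sdb'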
2025-09-04 00:42:28.587923 | orchestrator | 2025-09-04 00:42:28.587934 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:42:28.587944 | orchestrator | Thursday 04 September 2025 00:42:27 +0000 (0:00:00.217) 0:00:06.680 **** 2025-09-04 00:42:28.587955 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:28.587965 | orchestrator | 2025-09-04 00:42:28.587976 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:42:28.587987 | orchestrator | Thursday 04 September 2025 00:42:27 +0000 (0:00:00.223) 0:00:06.903 **** 2025-09-04 00:42:28.587997 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:28.588008 | orchestrator | 2025-09-04 00:42:28.588018 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:42:28.588029 | orchestrator | Thursday 04 September 2025 00:42:27 +0000 (0:00:00.214) 0:00:07.118 **** 2025-09-04 00:42:28.588039 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:28.588050 | orchestrator | 2025-09-04 00:42:28.588060 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:42:28.588079 | orchestrator | Thursday 04 September 2025 00:42:27 +0000 (0:00:00.229) 0:00:07.347 **** 2025-09-04 00:42:28.588089 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:28.588100 | orchestrator | 2025-09-04 00:42:28.588111 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:42:28.588121 | orchestrator | Thursday 04 September 2025 00:42:27 +0000 (0:00:00.220) 0:00:07.567 **** 2025-09-04 00:42:28.588132 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:28.588142 | orchestrator | 2025-09-04 00:42:28.588153 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:42:28.588163 | orchestrator | Thursday 04 September 2025 00:42:28 +0000 (0:00:00.235) 0:00:07.803 **** 2025-09-04 00:42:28.588174 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:28.588185 | orchestrator | 2025-09-04 00:42:28.588195 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:42:28.588206 | orchestrator | Thursday 04 September 2025 00:42:28 +0000 (0:00:00.225) 0:00:08.029 **** 2025-09-04 00:42:28.588224 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:36.793495 | orchestrator | 2025-09-04 00:42:36.793591 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:42:36.793604 | orchestrator | Thursday 04 September 2025 00:42:28 +0000 (0:00:00.207) 0:00:08.236 **** 2025-09-04 00:42:36.793612 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-09-04 00:42:36.793621 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-09-04 00:42:36.793628 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-09-04 00:42:36.793635 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-09-04 00:42:36.793642 | orchestrator | 2025-09-04 00:42:36.793649 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:42:36.793656 | orchestrator | Thursday 04 September 2025 00:42:29 +0000 (0:00:01.080) 0:00:09.317 **** 2025-09-04 00:42:36.793663 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:36.793669 | orchestrator | 2025-09-04 00:42:36.793676 | orchestrator | 
TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:42:36.793683 | orchestrator | Thursday 04 September 2025 00:42:29 +0000 (0:00:00.209) 0:00:09.527 **** 2025-09-04 00:42:36.793690 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:36.793696 | orchestrator | 2025-09-04 00:42:36.793703 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:42:36.793710 | orchestrator | Thursday 04 September 2025 00:42:30 +0000 (0:00:00.193) 0:00:09.721 **** 2025-09-04 00:42:36.793716 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:36.793723 | orchestrator | 2025-09-04 00:42:36.793730 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:42:36.793737 | orchestrator | Thursday 04 September 2025 00:42:30 +0000 (0:00:00.199) 0:00:09.920 **** 2025-09-04 00:42:36.793744 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:36.793750 | orchestrator | 2025-09-04 00:42:36.793757 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-04 00:42:36.793764 | orchestrator | Thursday 04 September 2025 00:42:30 +0000 (0:00:00.196) 0:00:10.117 **** 2025-09-04 00:42:36.793770 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:36.793777 | orchestrator | 2025-09-04 00:42:36.793784 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-09-04 00:42:36.793790 | orchestrator | Thursday 04 September 2025 00:42:30 +0000 (0:00:00.145) 0:00:10.263 **** 2025-09-04 00:42:36.793798 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e3ae8186-8e2b-5882-a4eb-fc7766e33302'}}) 2025-09-04 00:42:36.793806 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7cbfcabc-5503-586a-bc65-b738adbafdd0'}}) 2025-09-04 00:42:36.793812 | orchestrator | 2025-09-04 00:42:36.793819 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-04 00:42:36.793826 | orchestrator | Thursday 04 September 2025 00:42:30 +0000 (0:00:00.211) 0:00:10.475 **** 2025-09-04 00:42:36.793834 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-e3ae8186-8e2b-5882-a4eb-fc7766e33302', 'data_vg': 'ceph-e3ae8186-8e2b-5882-a4eb-fc7766e33302'}) 2025-09-04 00:42:36.793901 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-7cbfcabc-5503-586a-bc65-b738adbafdd0', 'data_vg': 'ceph-7cbfcabc-5503-586a-bc65-b738adbafdd0'}) 2025-09-04 00:42:36.793909 | orchestrator | 2025-09-04 00:42:36.793916 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-04 00:42:36.793923 | orchestrator | Thursday 04 September 2025 00:42:32 +0000 (0:00:02.083) 0:00:12.558 **** 2025-09-04 00:42:36.793930 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e3ae8186-8e2b-5882-a4eb-fc7766e33302', 'data_vg': 'ceph-e3ae8186-8e2b-5882-a4eb-fc7766e33302'})  2025-09-04 00:42:36.793938 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7cbfcabc-5503-586a-bc65-b738adbafdd0', 'data_vg': 'ceph-7cbfcabc-5503-586a-bc65-b738adbafdd0'})  2025-09-04 00:42:36.793945 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:36.793951 | orchestrator | 2025-09-04 00:42:36.793958 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-09-04 
00:42:36.793964 | orchestrator | Thursday 04 September 2025 00:42:33 +0000 (0:00:00.158) 0:00:12.717 **** 2025-09-04 00:42:36.793971 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-e3ae8186-8e2b-5882-a4eb-fc7766e33302', 'data_vg': 'ceph-e3ae8186-8e2b-5882-a4eb-fc7766e33302'}) 2025-09-04 00:42:36.793978 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-7cbfcabc-5503-586a-bc65-b738adbafdd0', 'data_vg': 'ceph-7cbfcabc-5503-586a-bc65-b738adbafdd0'}) 2025-09-04 00:42:36.793984 | orchestrator | 2025-09-04 00:42:36.793991 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-04 00:42:36.793998 | orchestrator | Thursday 04 September 2025 00:42:34 +0000 (0:00:01.478) 0:00:14.196 **** 2025-09-04 00:42:36.794004 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e3ae8186-8e2b-5882-a4eb-fc7766e33302', 'data_vg': 'ceph-e3ae8186-8e2b-5882-a4eb-fc7766e33302'})  2025-09-04 00:42:36.794064 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7cbfcabc-5503-586a-bc65-b738adbafdd0', 'data_vg': 'ceph-7cbfcabc-5503-586a-bc65-b738adbafdd0'})  2025-09-04 00:42:36.794076 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:36.794084 | orchestrator | 2025-09-04 00:42:36.794092 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-04 00:42:36.794100 | orchestrator | Thursday 04 September 2025 00:42:34 +0000 (0:00:00.171) 0:00:14.367 **** 2025-09-04 00:42:36.794108 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:36.794116 | orchestrator | 2025-09-04 00:42:36.794124 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-04 00:42:36.794146 | orchestrator | Thursday 04 September 2025 00:42:34 +0000 (0:00:00.152) 0:00:14.520 **** 2025-09-04 00:42:36.794154 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e3ae8186-8e2b-5882-a4eb-fc7766e33302', 'data_vg': 'ceph-e3ae8186-8e2b-5882-a4eb-fc7766e33302'})  2025-09-04 00:42:36.794162 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7cbfcabc-5503-586a-bc65-b738adbafdd0', 'data_vg': 'ceph-7cbfcabc-5503-586a-bc65-b738adbafdd0'})  2025-09-04 00:42:36.794170 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:36.794178 | orchestrator | 2025-09-04 00:42:36.794185 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-04 00:42:36.794193 | orchestrator | Thursday 04 September 2025 00:42:35 +0000 (0:00:00.429) 0:00:14.950 **** 2025-09-04 00:42:36.794200 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:36.794208 | orchestrator | 2025-09-04 00:42:36.794216 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-04 00:42:36.794224 | orchestrator | Thursday 04 September 2025 00:42:35 +0000 (0:00:00.130) 0:00:15.080 **** 2025-09-04 00:42:36.794232 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e3ae8186-8e2b-5882-a4eb-fc7766e33302', 'data_vg': 'ceph-e3ae8186-8e2b-5882-a4eb-fc7766e33302'})  2025-09-04 00:42:36.794245 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7cbfcabc-5503-586a-bc65-b738adbafdd0', 'data_vg': 'ceph-7cbfcabc-5503-586a-bc65-b738adbafdd0'})  2025-09-04 00:42:36.794254 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:36.794261 | orchestrator | 2025-09-04 00:42:36.794268 | orchestrator | 
TASK [Create DB+WAL VGs] ******************************************************* 2025-09-04 00:42:36.794276 | orchestrator | Thursday 04 September 2025 00:42:35 +0000 (0:00:00.144) 0:00:15.225 **** 2025-09-04 00:42:36.794283 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:36.794291 | orchestrator | 2025-09-04 00:42:36.794299 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-09-04 00:42:36.794306 | orchestrator | Thursday 04 September 2025 00:42:35 +0000 (0:00:00.171) 0:00:15.397 **** 2025-09-04 00:42:36.794313 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e3ae8186-8e2b-5882-a4eb-fc7766e33302', 'data_vg': 'ceph-e3ae8186-8e2b-5882-a4eb-fc7766e33302'})  2025-09-04 00:42:36.794321 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7cbfcabc-5503-586a-bc65-b738adbafdd0', 'data_vg': 'ceph-7cbfcabc-5503-586a-bc65-b738adbafdd0'})  2025-09-04 00:42:36.794329 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:36.794337 | orchestrator | 2025-09-04 00:42:36.794345 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-04 00:42:36.794352 | orchestrator | Thursday 04 September 2025 00:42:35 +0000 (0:00:00.152) 0:00:15.549 **** 2025-09-04 00:42:36.794359 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:42:36.794367 | orchestrator | 2025-09-04 00:42:36.794374 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-04 00:42:36.794382 | orchestrator | Thursday 04 September 2025 00:42:36 +0000 (0:00:00.138) 0:00:15.688 **** 2025-09-04 00:42:36.794410 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e3ae8186-8e2b-5882-a4eb-fc7766e33302', 'data_vg': 'ceph-e3ae8186-8e2b-5882-a4eb-fc7766e33302'})  2025-09-04 00:42:36.794419 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7cbfcabc-5503-586a-bc65-b738adbafdd0', 'data_vg': 'ceph-7cbfcabc-5503-586a-bc65-b738adbafdd0'})  2025-09-04 00:42:36.794426 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:36.794434 | orchestrator | 2025-09-04 00:42:36.794441 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-04 00:42:36.794447 | orchestrator | Thursday 04 September 2025 00:42:36 +0000 (0:00:00.166) 0:00:15.855 **** 2025-09-04 00:42:36.794454 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e3ae8186-8e2b-5882-a4eb-fc7766e33302', 'data_vg': 'ceph-e3ae8186-8e2b-5882-a4eb-fc7766e33302'})  2025-09-04 00:42:36.794460 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7cbfcabc-5503-586a-bc65-b738adbafdd0', 'data_vg': 'ceph-7cbfcabc-5503-586a-bc65-b738adbafdd0'})  2025-09-04 00:42:36.794467 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:36.794473 | orchestrator | 2025-09-04 00:42:36.794480 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-04 00:42:36.794487 | orchestrator | Thursday 04 September 2025 00:42:36 +0000 (0:00:00.162) 0:00:16.017 **** 2025-09-04 00:42:36.794493 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e3ae8186-8e2b-5882-a4eb-fc7766e33302', 'data_vg': 'ceph-e3ae8186-8e2b-5882-a4eb-fc7766e33302'})  2025-09-04 00:42:36.794500 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7cbfcabc-5503-586a-bc65-b738adbafdd0', 'data_vg': 'ceph-7cbfcabc-5503-586a-bc65-b738adbafdd0'})  
2025-09-04 00:42:36.794506 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:36.794513 | orchestrator | 2025-09-04 00:42:36.794519 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-04 00:42:36.794526 | orchestrator | Thursday 04 September 2025 00:42:36 +0000 (0:00:00.154) 0:00:16.172 **** 2025-09-04 00:42:36.794532 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:36.794543 | orchestrator | 2025-09-04 00:42:36.794549 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-04 00:42:36.794556 | orchestrator | Thursday 04 September 2025 00:42:36 +0000 (0:00:00.129) 0:00:16.301 **** 2025-09-04 00:42:36.794562 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:36.794569 | orchestrator | 2025-09-04 00:42:36.794579 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-04 00:42:43.762270 | orchestrator | Thursday 04 September 2025 00:42:36 +0000 (0:00:00.140) 0:00:16.442 **** 2025-09-04 00:42:43.762382 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:43.762400 | orchestrator | 2025-09-04 00:42:43.762414 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-04 00:42:43.762426 | orchestrator | Thursday 04 September 2025 00:42:36 +0000 (0:00:00.136) 0:00:16.578 **** 2025-09-04 00:42:43.762437 | orchestrator | ok: [testbed-node-3] => { 2025-09-04 00:42:43.762448 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-04 00:42:43.762460 | orchestrator | } 2025-09-04 00:42:43.762471 | orchestrator | 2025-09-04 00:42:43.762483 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-04 00:42:43.762494 | orchestrator | Thursday 04 September 2025 00:42:37 +0000 (0:00:00.359) 0:00:16.937 **** 2025-09-04 00:42:43.762505 | orchestrator | ok: [testbed-node-3] => { 2025-09-04 00:42:43.762516 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-04 00:42:43.762526 | orchestrator | } 2025-09-04 00:42:43.762537 | orchestrator | 2025-09-04 00:42:43.762548 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-04 00:42:43.762559 | orchestrator | Thursday 04 September 2025 00:42:37 +0000 (0:00:00.161) 0:00:17.099 **** 2025-09-04 00:42:43.762570 | orchestrator | ok: [testbed-node-3] => { 2025-09-04 00:42:43.762581 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-04 00:42:43.762592 | orchestrator | } 2025-09-04 00:42:43.762604 | orchestrator | 2025-09-04 00:42:43.762616 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-04 00:42:43.762627 | orchestrator | Thursday 04 September 2025 00:42:37 +0000 (0:00:00.161) 0:00:17.261 **** 2025-09-04 00:42:43.762637 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:42:43.762649 | orchestrator | 2025-09-04 00:42:43.762660 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-09-04 00:42:43.762670 | orchestrator | Thursday 04 September 2025 00:42:38 +0000 (0:00:00.784) 0:00:18.045 **** 2025-09-04 00:42:43.762681 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:42:43.762692 | orchestrator | 2025-09-04 00:42:43.762703 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-04 00:42:43.762714 | orchestrator | Thursday 04 September 2025 00:42:38 +0000 
(0:00:00.543) 0:00:18.588 **** 2025-09-04 00:42:43.762725 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:42:43.762736 | orchestrator | 2025-09-04 00:42:43.762747 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-04 00:42:43.762758 | orchestrator | Thursday 04 September 2025 00:42:39 +0000 (0:00:00.586) 0:00:19.175 **** 2025-09-04 00:42:43.762768 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:42:43.762779 | orchestrator | 2025-09-04 00:42:43.762790 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-04 00:42:43.762801 | orchestrator | Thursday 04 September 2025 00:42:39 +0000 (0:00:00.179) 0:00:19.355 **** 2025-09-04 00:42:43.762814 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:43.762827 | orchestrator | 2025-09-04 00:42:43.762839 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-04 00:42:43.762873 | orchestrator | Thursday 04 September 2025 00:42:39 +0000 (0:00:00.112) 0:00:19.467 **** 2025-09-04 00:42:43.762886 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:43.762899 | orchestrator | 2025-09-04 00:42:43.762912 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-04 00:42:43.762925 | orchestrator | Thursday 04 September 2025 00:42:39 +0000 (0:00:00.107) 0:00:19.575 **** 2025-09-04 00:42:43.762938 | orchestrator | ok: [testbed-node-3] => { 2025-09-04 00:42:43.762976 | orchestrator |  "vgs_report": { 2025-09-04 00:42:43.763003 | orchestrator |  "vg": [] 2025-09-04 00:42:43.763016 | orchestrator |  } 2025-09-04 00:42:43.763028 | orchestrator | } 2025-09-04 00:42:43.763040 | orchestrator | 2025-09-04 00:42:43.763052 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-04 00:42:43.763064 | orchestrator | Thursday 04 September 2025 00:42:40 +0000 (0:00:00.186) 0:00:19.761 **** 2025-09-04 00:42:43.763076 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:43.763088 | orchestrator | 2025-09-04 00:42:43.763101 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-04 00:42:43.763113 | orchestrator | Thursday 04 September 2025 00:42:40 +0000 (0:00:00.163) 0:00:19.925 **** 2025-09-04 00:42:43.763125 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:43.763137 | orchestrator | 2025-09-04 00:42:43.763150 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-04 00:42:43.763161 | orchestrator | Thursday 04 September 2025 00:42:40 +0000 (0:00:00.139) 0:00:20.064 **** 2025-09-04 00:42:43.763172 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:43.763183 | orchestrator | 2025-09-04 00:42:43.763193 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-04 00:42:43.763204 | orchestrator | Thursday 04 September 2025 00:42:40 +0000 (0:00:00.349) 0:00:20.414 **** 2025-09-04 00:42:43.763214 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:43.763225 | orchestrator | 2025-09-04 00:42:43.763236 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-04 00:42:43.763246 | orchestrator | Thursday 04 September 2025 00:42:40 +0000 (0:00:00.151) 0:00:20.565 **** 2025-09-04 00:42:43.763257 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:43.763268 | orchestrator | 
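All of the DB/WAL sizing and capacity checks in this play are skipped on these nodes, consistent with a configuration that defines only plain block OSDs (no ceph_db_devices, ceph_wal_devices or ceph_db_wal_devices), which is also why the VGs report above lists no VGs. A minimal sketch of how a guard such as "Fail if size of DB LVs on ceph_db_devices > available" can be expressed with ansible.builtin.assert; the fact names are hypothetical and the task body is illustrative, not the playbook's actual check:

    # illustrative capacity guard; _db_lvs_size_needed / _db_vg_size_available are hypothetical facts
    - name: Fail if size of DB LVs on ceph_db_devices > available
      ansible.builtin.assert:
        that:
          - _db_lvs_size_needed | int <= _db_vg_size_available | int
        fail_msg: "Requested DB LVs do not fit into the available DB VG capacity"
      when: ceph_db_devices is defined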
2025-09-04 00:42:43.763278 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-04 00:42:43.763289 | orchestrator | Thursday 04 September 2025 00:42:41 +0000 (0:00:00.160) 0:00:20.725 **** 2025-09-04 00:42:43.763299 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:43.763310 | orchestrator | 2025-09-04 00:42:43.763320 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-04 00:42:43.763331 | orchestrator | Thursday 04 September 2025 00:42:41 +0000 (0:00:00.173) 0:00:20.899 **** 2025-09-04 00:42:43.763342 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:43.763352 | orchestrator | 2025-09-04 00:42:43.763363 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-04 00:42:43.763373 | orchestrator | Thursday 04 September 2025 00:42:41 +0000 (0:00:00.160) 0:00:21.059 **** 2025-09-04 00:42:43.763384 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:43.763395 | orchestrator | 2025-09-04 00:42:43.763406 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-04 00:42:43.763433 | orchestrator | Thursday 04 September 2025 00:42:41 +0000 (0:00:00.149) 0:00:21.209 **** 2025-09-04 00:42:43.763445 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:43.763455 | orchestrator | 2025-09-04 00:42:43.763466 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-04 00:42:43.763477 | orchestrator | Thursday 04 September 2025 00:42:41 +0000 (0:00:00.139) 0:00:21.348 **** 2025-09-04 00:42:43.763488 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:43.763498 | orchestrator | 2025-09-04 00:42:43.763509 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-04 00:42:43.763520 | orchestrator | Thursday 04 September 2025 00:42:41 +0000 (0:00:00.147) 0:00:21.495 **** 2025-09-04 00:42:43.763530 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:43.763541 | orchestrator | 2025-09-04 00:42:43.763552 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-04 00:42:43.763562 | orchestrator | Thursday 04 September 2025 00:42:41 +0000 (0:00:00.139) 0:00:21.635 **** 2025-09-04 00:42:43.763573 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:43.763584 | orchestrator | 2025-09-04 00:42:43.763603 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-04 00:42:43.763614 | orchestrator | Thursday 04 September 2025 00:42:42 +0000 (0:00:00.190) 0:00:21.825 **** 2025-09-04 00:42:43.763625 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:43.763635 | orchestrator | 2025-09-04 00:42:43.763646 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-04 00:42:43.763657 | orchestrator | Thursday 04 September 2025 00:42:42 +0000 (0:00:00.128) 0:00:21.954 **** 2025-09-04 00:42:43.763667 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:43.763678 | orchestrator | 2025-09-04 00:42:43.763689 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-04 00:42:43.763699 | orchestrator | Thursday 04 September 2025 00:42:42 +0000 (0:00:00.182) 0:00:22.137 **** 2025-09-04 00:42:43.763711 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-e3ae8186-8e2b-5882-a4eb-fc7766e33302', 'data_vg': 'ceph-e3ae8186-8e2b-5882-a4eb-fc7766e33302'})  2025-09-04 00:42:43.763723 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7cbfcabc-5503-586a-bc65-b738adbafdd0', 'data_vg': 'ceph-7cbfcabc-5503-586a-bc65-b738adbafdd0'})  2025-09-04 00:42:43.763733 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:43.763744 | orchestrator | 2025-09-04 00:42:43.763755 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-04 00:42:43.763766 | orchestrator | Thursday 04 September 2025 00:42:42 +0000 (0:00:00.389) 0:00:22.526 **** 2025-09-04 00:42:43.763776 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e3ae8186-8e2b-5882-a4eb-fc7766e33302', 'data_vg': 'ceph-e3ae8186-8e2b-5882-a4eb-fc7766e33302'})  2025-09-04 00:42:43.763787 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7cbfcabc-5503-586a-bc65-b738adbafdd0', 'data_vg': 'ceph-7cbfcabc-5503-586a-bc65-b738adbafdd0'})  2025-09-04 00:42:43.763798 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:43.763809 | orchestrator | 2025-09-04 00:42:43.763819 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-04 00:42:43.763830 | orchestrator | Thursday 04 September 2025 00:42:43 +0000 (0:00:00.172) 0:00:22.699 **** 2025-09-04 00:42:43.763841 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e3ae8186-8e2b-5882-a4eb-fc7766e33302', 'data_vg': 'ceph-e3ae8186-8e2b-5882-a4eb-fc7766e33302'})  2025-09-04 00:42:43.763888 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7cbfcabc-5503-586a-bc65-b738adbafdd0', 'data_vg': 'ceph-7cbfcabc-5503-586a-bc65-b738adbafdd0'})  2025-09-04 00:42:43.763901 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:43.763911 | orchestrator | 2025-09-04 00:42:43.763922 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-04 00:42:43.763932 | orchestrator | Thursday 04 September 2025 00:42:43 +0000 (0:00:00.197) 0:00:22.896 **** 2025-09-04 00:42:43.763943 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e3ae8186-8e2b-5882-a4eb-fc7766e33302', 'data_vg': 'ceph-e3ae8186-8e2b-5882-a4eb-fc7766e33302'})  2025-09-04 00:42:43.763954 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7cbfcabc-5503-586a-bc65-b738adbafdd0', 'data_vg': 'ceph-7cbfcabc-5503-586a-bc65-b738adbafdd0'})  2025-09-04 00:42:43.763964 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:43.763975 | orchestrator | 2025-09-04 00:42:43.763986 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-04 00:42:43.763996 | orchestrator | Thursday 04 September 2025 00:42:43 +0000 (0:00:00.170) 0:00:23.067 **** 2025-09-04 00:42:43.764007 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e3ae8186-8e2b-5882-a4eb-fc7766e33302', 'data_vg': 'ceph-e3ae8186-8e2b-5882-a4eb-fc7766e33302'})  2025-09-04 00:42:43.764018 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7cbfcabc-5503-586a-bc65-b738adbafdd0', 'data_vg': 'ceph-7cbfcabc-5503-586a-bc65-b738adbafdd0'})  2025-09-04 00:42:43.764029 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:43.764046 | orchestrator | 2025-09-04 00:42:43.764057 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 
2025-09-04 00:42:43.764068 | orchestrator | Thursday 04 September 2025 00:42:43 +0000 (0:00:00.177) 0:00:23.245 **** 2025-09-04 00:42:43.764087 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e3ae8186-8e2b-5882-a4eb-fc7766e33302', 'data_vg': 'ceph-e3ae8186-8e2b-5882-a4eb-fc7766e33302'})  2025-09-04 00:42:43.764105 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7cbfcabc-5503-586a-bc65-b738adbafdd0', 'data_vg': 'ceph-7cbfcabc-5503-586a-bc65-b738adbafdd0'})  2025-09-04 00:42:49.415486 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:49.415598 | orchestrator | 2025-09-04 00:42:49.415616 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-04 00:42:49.415629 | orchestrator | Thursday 04 September 2025 00:42:43 +0000 (0:00:00.169) 0:00:23.414 **** 2025-09-04 00:42:49.415640 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e3ae8186-8e2b-5882-a4eb-fc7766e33302', 'data_vg': 'ceph-e3ae8186-8e2b-5882-a4eb-fc7766e33302'})  2025-09-04 00:42:49.415654 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7cbfcabc-5503-586a-bc65-b738adbafdd0', 'data_vg': 'ceph-7cbfcabc-5503-586a-bc65-b738adbafdd0'})  2025-09-04 00:42:49.415665 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:49.415676 | orchestrator | 2025-09-04 00:42:49.415687 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-04 00:42:49.415698 | orchestrator | Thursday 04 September 2025 00:42:43 +0000 (0:00:00.178) 0:00:23.593 **** 2025-09-04 00:42:49.415708 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e3ae8186-8e2b-5882-a4eb-fc7766e33302', 'data_vg': 'ceph-e3ae8186-8e2b-5882-a4eb-fc7766e33302'})  2025-09-04 00:42:49.415719 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7cbfcabc-5503-586a-bc65-b738adbafdd0', 'data_vg': 'ceph-7cbfcabc-5503-586a-bc65-b738adbafdd0'})  2025-09-04 00:42:49.415730 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:49.415741 | orchestrator | 2025-09-04 00:42:49.415752 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-04 00:42:49.415763 | orchestrator | Thursday 04 September 2025 00:42:44 +0000 (0:00:00.171) 0:00:23.764 **** 2025-09-04 00:42:49.415774 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:42:49.415786 | orchestrator | 2025-09-04 00:42:49.415796 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-04 00:42:49.415807 | orchestrator | Thursday 04 September 2025 00:42:44 +0000 (0:00:00.595) 0:00:24.359 **** 2025-09-04 00:42:49.415818 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:42:49.415829 | orchestrator | 2025-09-04 00:42:49.415839 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-04 00:42:49.415893 | orchestrator | Thursday 04 September 2025 00:42:45 +0000 (0:00:00.599) 0:00:24.958 **** 2025-09-04 00:42:49.415905 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:42:49.415916 | orchestrator | 2025-09-04 00:42:49.415927 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-04 00:42:49.415938 | orchestrator | Thursday 04 September 2025 00:42:45 +0000 (0:00:00.176) 0:00:25.135 **** 2025-09-04 00:42:49.415949 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 
'osd-block-7cbfcabc-5503-586a-bc65-b738adbafdd0', 'vg_name': 'ceph-7cbfcabc-5503-586a-bc65-b738adbafdd0'}) 2025-09-04 00:42:49.415962 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-e3ae8186-8e2b-5882-a4eb-fc7766e33302', 'vg_name': 'ceph-e3ae8186-8e2b-5882-a4eb-fc7766e33302'}) 2025-09-04 00:42:49.415973 | orchestrator | 2025-09-04 00:42:49.416000 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-04 00:42:49.416012 | orchestrator | Thursday 04 September 2025 00:42:45 +0000 (0:00:00.167) 0:00:25.303 **** 2025-09-04 00:42:49.416024 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e3ae8186-8e2b-5882-a4eb-fc7766e33302', 'data_vg': 'ceph-e3ae8186-8e2b-5882-a4eb-fc7766e33302'})  2025-09-04 00:42:49.416066 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7cbfcabc-5503-586a-bc65-b738adbafdd0', 'data_vg': 'ceph-7cbfcabc-5503-586a-bc65-b738adbafdd0'})  2025-09-04 00:42:49.416080 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:49.416093 | orchestrator | 2025-09-04 00:42:49.416105 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-04 00:42:49.416118 | orchestrator | Thursday 04 September 2025 00:42:46 +0000 (0:00:00.402) 0:00:25.705 **** 2025-09-04 00:42:49.416130 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e3ae8186-8e2b-5882-a4eb-fc7766e33302', 'data_vg': 'ceph-e3ae8186-8e2b-5882-a4eb-fc7766e33302'})  2025-09-04 00:42:49.416143 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7cbfcabc-5503-586a-bc65-b738adbafdd0', 'data_vg': 'ceph-7cbfcabc-5503-586a-bc65-b738adbafdd0'})  2025-09-04 00:42:49.416155 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:49.416167 | orchestrator | 2025-09-04 00:42:49.416180 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-04 00:42:49.416192 | orchestrator | Thursday 04 September 2025 00:42:46 +0000 (0:00:00.173) 0:00:25.878 **** 2025-09-04 00:42:49.416205 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e3ae8186-8e2b-5882-a4eb-fc7766e33302', 'data_vg': 'ceph-e3ae8186-8e2b-5882-a4eb-fc7766e33302'})  2025-09-04 00:42:49.416217 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7cbfcabc-5503-586a-bc65-b738adbafdd0', 'data_vg': 'ceph-7cbfcabc-5503-586a-bc65-b738adbafdd0'})  2025-09-04 00:42:49.416230 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:42:49.416243 | orchestrator | 2025-09-04 00:42:49.416255 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-04 00:42:49.416268 | orchestrator | Thursday 04 September 2025 00:42:46 +0000 (0:00:00.160) 0:00:26.039 **** 2025-09-04 00:42:49.416279 | orchestrator | ok: [testbed-node-3] => { 2025-09-04 00:42:49.416292 | orchestrator |  "lvm_report": { 2025-09-04 00:42:49.416306 | orchestrator |  "lv": [ 2025-09-04 00:42:49.416317 | orchestrator |  { 2025-09-04 00:42:49.416345 | orchestrator |  "lv_name": "osd-block-7cbfcabc-5503-586a-bc65-b738adbafdd0", 2025-09-04 00:42:49.416358 | orchestrator |  "vg_name": "ceph-7cbfcabc-5503-586a-bc65-b738adbafdd0" 2025-09-04 00:42:49.416368 | orchestrator |  }, 2025-09-04 00:42:49.416379 | orchestrator |  { 2025-09-04 00:42:49.416390 | orchestrator |  "lv_name": "osd-block-e3ae8186-8e2b-5882-a4eb-fc7766e33302", 2025-09-04 00:42:49.416400 | orchestrator |  "vg_name": 
"ceph-e3ae8186-8e2b-5882-a4eb-fc7766e33302" 2025-09-04 00:42:49.416411 | orchestrator |  } 2025-09-04 00:42:49.416422 | orchestrator |  ], 2025-09-04 00:42:49.416432 | orchestrator |  "pv": [ 2025-09-04 00:42:49.416443 | orchestrator |  { 2025-09-04 00:42:49.416453 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-04 00:42:49.416464 | orchestrator |  "vg_name": "ceph-e3ae8186-8e2b-5882-a4eb-fc7766e33302" 2025-09-04 00:42:49.416475 | orchestrator |  }, 2025-09-04 00:42:49.416485 | orchestrator |  { 2025-09-04 00:42:49.416496 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-04 00:42:49.416507 | orchestrator |  "vg_name": "ceph-7cbfcabc-5503-586a-bc65-b738adbafdd0" 2025-09-04 00:42:49.416517 | orchestrator |  } 2025-09-04 00:42:49.416528 | orchestrator |  ] 2025-09-04 00:42:49.416538 | orchestrator |  } 2025-09-04 00:42:49.416549 | orchestrator | } 2025-09-04 00:42:49.416560 | orchestrator | 2025-09-04 00:42:49.416571 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-04 00:42:49.416582 | orchestrator | 2025-09-04 00:42:49.416593 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-04 00:42:49.416603 | orchestrator | Thursday 04 September 2025 00:42:46 +0000 (0:00:00.286) 0:00:26.325 **** 2025-09-04 00:42:49.416614 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-04 00:42:49.416634 | orchestrator | 2025-09-04 00:42:49.416645 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-04 00:42:49.416655 | orchestrator | Thursday 04 September 2025 00:42:46 +0000 (0:00:00.269) 0:00:26.595 **** 2025-09-04 00:42:49.416666 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:42:49.416677 | orchestrator | 2025-09-04 00:42:49.416687 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:42:49.416698 | orchestrator | Thursday 04 September 2025 00:42:47 +0000 (0:00:00.233) 0:00:26.829 **** 2025-09-04 00:42:49.416708 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-09-04 00:42:49.416719 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-09-04 00:42:49.416730 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-09-04 00:42:49.416740 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-09-04 00:42:49.416751 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-09-04 00:42:49.416761 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-09-04 00:42:49.416772 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-09-04 00:42:49.416788 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-09-04 00:42:49.416799 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-09-04 00:42:49.416810 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-09-04 00:42:49.416820 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-09-04 00:42:49.416831 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 
2025-09-04 00:42:49.416842 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-09-04 00:42:49.416871 | orchestrator | 2025-09-04 00:42:49.416883 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:42:49.416893 | orchestrator | Thursday 04 September 2025 00:42:47 +0000 (0:00:00.412) 0:00:27.241 **** 2025-09-04 00:42:49.416904 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:42:49.416915 | orchestrator | 2025-09-04 00:42:49.416925 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:42:49.416936 | orchestrator | Thursday 04 September 2025 00:42:47 +0000 (0:00:00.201) 0:00:27.443 **** 2025-09-04 00:42:49.416947 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:42:49.416957 | orchestrator | 2025-09-04 00:42:49.416968 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:42:49.416979 | orchestrator | Thursday 04 September 2025 00:42:47 +0000 (0:00:00.208) 0:00:27.652 **** 2025-09-04 00:42:49.416989 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:42:49.417000 | orchestrator | 2025-09-04 00:42:49.417010 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:42:49.417021 | orchestrator | Thursday 04 September 2025 00:42:48 +0000 (0:00:00.604) 0:00:28.256 **** 2025-09-04 00:42:49.417031 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:42:49.417042 | orchestrator | 2025-09-04 00:42:49.417053 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:42:49.417063 | orchestrator | Thursday 04 September 2025 00:42:48 +0000 (0:00:00.206) 0:00:28.462 **** 2025-09-04 00:42:49.417074 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:42:49.417085 | orchestrator | 2025-09-04 00:42:49.417095 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:42:49.417106 | orchestrator | Thursday 04 September 2025 00:42:48 +0000 (0:00:00.183) 0:00:28.646 **** 2025-09-04 00:42:49.417117 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:42:49.417127 | orchestrator | 2025-09-04 00:42:49.417145 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:42:49.417156 | orchestrator | Thursday 04 September 2025 00:42:49 +0000 (0:00:00.205) 0:00:28.851 **** 2025-09-04 00:42:49.417167 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:42:49.417178 | orchestrator | 2025-09-04 00:42:49.417196 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:43:00.182704 | orchestrator | Thursday 04 September 2025 00:42:49 +0000 (0:00:00.217) 0:00:29.068 **** 2025-09-04 00:43:00.182792 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:00.182807 | orchestrator | 2025-09-04 00:43:00.182819 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:43:00.182830 | orchestrator | Thursday 04 September 2025 00:42:49 +0000 (0:00:00.206) 0:00:29.274 **** 2025-09-04 00:43:00.182841 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d) 2025-09-04 00:43:00.182890 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d) 2025-09-04 
00:43:00.182901 | orchestrator | 2025-09-04 00:43:00.182912 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:43:00.182923 | orchestrator | Thursday 04 September 2025 00:42:50 +0000 (0:00:00.411) 0:00:29.686 **** 2025-09-04 00:43:00.182934 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_29dd0003-fb59-4bf4-b86e-44c71c2fb42f) 2025-09-04 00:43:00.182945 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_29dd0003-fb59-4bf4-b86e-44c71c2fb42f) 2025-09-04 00:43:00.182956 | orchestrator | 2025-09-04 00:43:00.182967 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:43:00.182978 | orchestrator | Thursday 04 September 2025 00:42:50 +0000 (0:00:00.422) 0:00:30.108 **** 2025-09-04 00:43:00.182988 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1db7e25a-216c-47db-963d-7c5f28c46cb6) 2025-09-04 00:43:00.182999 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1db7e25a-216c-47db-963d-7c5f28c46cb6) 2025-09-04 00:43:00.183010 | orchestrator | 2025-09-04 00:43:00.183021 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:43:00.183032 | orchestrator | Thursday 04 September 2025 00:42:50 +0000 (0:00:00.417) 0:00:30.526 **** 2025-09-04 00:43:00.183042 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_10337e6d-8da5-41a0-908c-89b7c0023a77) 2025-09-04 00:43:00.183054 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_10337e6d-8da5-41a0-908c-89b7c0023a77) 2025-09-04 00:43:00.183064 | orchestrator | 2025-09-04 00:43:00.183075 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:43:00.183086 | orchestrator | Thursday 04 September 2025 00:42:51 +0000 (0:00:00.435) 0:00:30.962 **** 2025-09-04 00:43:00.183097 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-04 00:43:00.183107 | orchestrator | 2025-09-04 00:43:00.183118 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:43:00.183129 | orchestrator | Thursday 04 September 2025 00:42:51 +0000 (0:00:00.334) 0:00:31.297 **** 2025-09-04 00:43:00.183140 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-09-04 00:43:00.183151 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-09-04 00:43:00.183162 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-09-04 00:43:00.183173 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-09-04 00:43:00.183184 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-09-04 00:43:00.183194 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-09-04 00:43:00.183218 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-09-04 00:43:00.183250 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-09-04 00:43:00.183264 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-09-04 00:43:00.183276 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-09-04 00:43:00.183290 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-09-04 00:43:00.183303 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-09-04 00:43:00.183316 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-09-04 00:43:00.183328 | orchestrator | 2025-09-04 00:43:00.183341 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:43:00.183355 | orchestrator | Thursday 04 September 2025 00:42:52 +0000 (0:00:00.671) 0:00:31.969 **** 2025-09-04 00:43:00.183368 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:00.183381 | orchestrator | 2025-09-04 00:43:00.183394 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:43:00.183407 | orchestrator | Thursday 04 September 2025 00:42:52 +0000 (0:00:00.277) 0:00:32.246 **** 2025-09-04 00:43:00.183419 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:00.183432 | orchestrator | 2025-09-04 00:43:00.183444 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:43:00.183457 | orchestrator | Thursday 04 September 2025 00:42:52 +0000 (0:00:00.201) 0:00:32.448 **** 2025-09-04 00:43:00.183470 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:00.183482 | orchestrator | 2025-09-04 00:43:00.183495 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:43:00.183507 | orchestrator | Thursday 04 September 2025 00:42:53 +0000 (0:00:00.215) 0:00:32.663 **** 2025-09-04 00:43:00.183521 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:00.183534 | orchestrator | 2025-09-04 00:43:00.183562 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:43:00.183576 | orchestrator | Thursday 04 September 2025 00:42:53 +0000 (0:00:00.188) 0:00:32.852 **** 2025-09-04 00:43:00.183587 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:00.183597 | orchestrator | 2025-09-04 00:43:00.183608 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:43:00.183619 | orchestrator | Thursday 04 September 2025 00:42:53 +0000 (0:00:00.207) 0:00:33.060 **** 2025-09-04 00:43:00.183629 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:00.183640 | orchestrator | 2025-09-04 00:43:00.183651 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:43:00.183661 | orchestrator | Thursday 04 September 2025 00:42:53 +0000 (0:00:00.217) 0:00:33.278 **** 2025-09-04 00:43:00.183672 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:00.183682 | orchestrator | 2025-09-04 00:43:00.183693 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:43:00.183704 | orchestrator | Thursday 04 September 2025 00:42:53 +0000 (0:00:00.225) 0:00:33.503 **** 2025-09-04 00:43:00.183714 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:00.183725 | orchestrator | 2025-09-04 00:43:00.183735 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:43:00.183746 | orchestrator 
| Thursday 04 September 2025 00:42:54 +0000 (0:00:00.258) 0:00:33.761 **** 2025-09-04 00:43:00.183757 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-09-04 00:43:00.183767 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-09-04 00:43:00.183779 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-09-04 00:43:00.183789 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-09-04 00:43:00.183800 | orchestrator | 2025-09-04 00:43:00.183811 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:43:00.183822 | orchestrator | Thursday 04 September 2025 00:42:54 +0000 (0:00:00.842) 0:00:34.604 **** 2025-09-04 00:43:00.183840 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:00.183878 | orchestrator | 2025-09-04 00:43:00.183889 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:43:00.183900 | orchestrator | Thursday 04 September 2025 00:42:55 +0000 (0:00:00.213) 0:00:34.817 **** 2025-09-04 00:43:00.183911 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:00.183921 | orchestrator | 2025-09-04 00:43:00.183932 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:43:00.183943 | orchestrator | Thursday 04 September 2025 00:42:55 +0000 (0:00:00.200) 0:00:35.018 **** 2025-09-04 00:43:00.183953 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:00.183964 | orchestrator | 2025-09-04 00:43:00.183975 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:43:00.183985 | orchestrator | Thursday 04 September 2025 00:42:56 +0000 (0:00:00.650) 0:00:35.668 **** 2025-09-04 00:43:00.183996 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:00.184007 | orchestrator | 2025-09-04 00:43:00.184017 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-04 00:43:00.184028 | orchestrator | Thursday 04 September 2025 00:42:56 +0000 (0:00:00.221) 0:00:35.890 **** 2025-09-04 00:43:00.184044 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:00.184055 | orchestrator | 2025-09-04 00:43:00.184066 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-09-04 00:43:00.184077 | orchestrator | Thursday 04 September 2025 00:42:56 +0000 (0:00:00.151) 0:00:36.041 **** 2025-09-04 00:43:00.184087 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '23145802-20b8-57b1-85d3-97c3024bf9a4'}}) 2025-09-04 00:43:00.184098 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f7bb41d6-3a07-594b-a5ed-81bcd3eca58b'}}) 2025-09-04 00:43:00.184109 | orchestrator | 2025-09-04 00:43:00.184120 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-04 00:43:00.184130 | orchestrator | Thursday 04 September 2025 00:42:56 +0000 (0:00:00.236) 0:00:36.278 **** 2025-09-04 00:43:00.184142 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-23145802-20b8-57b1-85d3-97c3024bf9a4', 'data_vg': 'ceph-23145802-20b8-57b1-85d3-97c3024bf9a4'}) 2025-09-04 00:43:00.184154 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b', 'data_vg': 'ceph-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b'}) 2025-09-04 00:43:00.184165 | orchestrator | 2025-09-04 00:43:00.184175 | orchestrator | TASK 
[Print 'Create block VGs'] ************************************************ 2025-09-04 00:43:00.184186 | orchestrator | Thursday 04 September 2025 00:42:58 +0000 (0:00:01.904) 0:00:38.182 **** 2025-09-04 00:43:00.184197 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-23145802-20b8-57b1-85d3-97c3024bf9a4', 'data_vg': 'ceph-23145802-20b8-57b1-85d3-97c3024bf9a4'})  2025-09-04 00:43:00.184209 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b', 'data_vg': 'ceph-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b'})  2025-09-04 00:43:00.184219 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:00.184230 | orchestrator | 2025-09-04 00:43:00.184241 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-09-04 00:43:00.184251 | orchestrator | Thursday 04 September 2025 00:42:58 +0000 (0:00:00.190) 0:00:38.373 **** 2025-09-04 00:43:00.184262 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-23145802-20b8-57b1-85d3-97c3024bf9a4', 'data_vg': 'ceph-23145802-20b8-57b1-85d3-97c3024bf9a4'}) 2025-09-04 00:43:00.184274 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b', 'data_vg': 'ceph-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b'}) 2025-09-04 00:43:00.184284 | orchestrator | 2025-09-04 00:43:00.184302 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-04 00:43:05.958106 | orchestrator | Thursday 04 September 2025 00:43:00 +0000 (0:00:01.456) 0:00:39.830 **** 2025-09-04 00:43:05.958217 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-23145802-20b8-57b1-85d3-97c3024bf9a4', 'data_vg': 'ceph-23145802-20b8-57b1-85d3-97c3024bf9a4'})  2025-09-04 00:43:05.958234 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b', 'data_vg': 'ceph-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b'})  2025-09-04 00:43:05.958245 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:05.958257 | orchestrator | 2025-09-04 00:43:05.958269 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-04 00:43:05.958280 | orchestrator | Thursday 04 September 2025 00:43:00 +0000 (0:00:00.156) 0:00:39.987 **** 2025-09-04 00:43:05.958291 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:05.958301 | orchestrator | 2025-09-04 00:43:05.958312 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-04 00:43:05.958323 | orchestrator | Thursday 04 September 2025 00:43:00 +0000 (0:00:00.148) 0:00:40.135 **** 2025-09-04 00:43:05.958334 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-23145802-20b8-57b1-85d3-97c3024bf9a4', 'data_vg': 'ceph-23145802-20b8-57b1-85d3-97c3024bf9a4'})  2025-09-04 00:43:05.958345 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b', 'data_vg': 'ceph-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b'})  2025-09-04 00:43:05.958355 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:05.958366 | orchestrator | 2025-09-04 00:43:05.958376 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-04 00:43:05.958387 | orchestrator | Thursday 04 September 2025 00:43:00 +0000 (0:00:00.147) 0:00:40.282 **** 2025-09-04 00:43:05.958397 | orchestrator | skipping: [testbed-node-4] 
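[Annotation] The 'Create block VGs' / 'Create block LVs' tasks above take each entry of ceph_osd_devices for testbed-node-4 (sdb and sdc, each carrying an osd_lvm_uuid) and create one LVM volume group named ceph-<uuid> backed by the raw device, plus a single logical volume osd-block-<uuid> inside that VG; the DB/WAL tasks are skipped here, consistent with no ceph_db_devices, ceph_wal_devices or ceph_db_wal_devices being in use on this node (the _num_osds_wanted_per_*_vg dicts printed below are all empty). The task file itself is not included in this log, so the following is only a minimal illustrative sketch, assuming the community.general collection and a ceph_osd_devices dict shaped like the loop items shown above:

  # Illustrative sketch only; not the OSISM task file executed in this job.
  # Assumes ceph_osd_devices maps device names to an osd_lvm_uuid, e.g.:
  #   ceph_osd_devices:
  #     sdb: { osd_lvm_uuid: 23145802-20b8-57b1-85d3-97c3024bf9a4 }
  - name: Create block VGs
    community.general.lvg:
      vg: "ceph-{{ item.value.osd_lvm_uuid }}"
      pvs: "/dev/{{ item.key }}"
    loop: "{{ ceph_osd_devices | dict2items }}"

  - name: Create block LVs
    community.general.lvol:
      vg: "ceph-{{ item.value.osd_lvm_uuid }}"
      lv: "osd-block-{{ item.value.osd_lvm_uuid }}"
      size: 100%FREE    # assumption: the LV fills the whole VG
      shrink: false
    loop: "{{ ceph_osd_devices | dict2items }}"

The resulting VG/LV pairs are what later appears in the lvm_report data for this node (lv_name osd-block-<uuid>, vg_name ceph-<uuid>, with /dev/sdb and /dev/sdc as the backing PVs).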
2025-09-04 00:43:05.958408 | orchestrator | 2025-09-04 00:43:05.958418 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-04 00:43:05.958429 | orchestrator | Thursday 04 September 2025 00:43:00 +0000 (0:00:00.139) 0:00:40.421 **** 2025-09-04 00:43:05.958440 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-23145802-20b8-57b1-85d3-97c3024bf9a4', 'data_vg': 'ceph-23145802-20b8-57b1-85d3-97c3024bf9a4'})  2025-09-04 00:43:05.958451 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b', 'data_vg': 'ceph-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b'})  2025-09-04 00:43:05.958461 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:05.958472 | orchestrator | 2025-09-04 00:43:05.958483 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-04 00:43:05.958494 | orchestrator | Thursday 04 September 2025 00:43:00 +0000 (0:00:00.158) 0:00:40.580 **** 2025-09-04 00:43:05.958513 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:05.958525 | orchestrator | 2025-09-04 00:43:05.958535 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-09-04 00:43:05.958546 | orchestrator | Thursday 04 September 2025 00:43:01 +0000 (0:00:00.353) 0:00:40.933 **** 2025-09-04 00:43:05.958557 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-23145802-20b8-57b1-85d3-97c3024bf9a4', 'data_vg': 'ceph-23145802-20b8-57b1-85d3-97c3024bf9a4'})  2025-09-04 00:43:05.958568 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b', 'data_vg': 'ceph-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b'})  2025-09-04 00:43:05.958578 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:05.958589 | orchestrator | 2025-09-04 00:43:05.958600 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-04 00:43:05.958610 | orchestrator | Thursday 04 September 2025 00:43:01 +0000 (0:00:00.161) 0:00:41.095 **** 2025-09-04 00:43:05.958621 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:43:05.958634 | orchestrator | 2025-09-04 00:43:05.958648 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-04 00:43:05.958661 | orchestrator | Thursday 04 September 2025 00:43:01 +0000 (0:00:00.151) 0:00:41.247 **** 2025-09-04 00:43:05.958682 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-23145802-20b8-57b1-85d3-97c3024bf9a4', 'data_vg': 'ceph-23145802-20b8-57b1-85d3-97c3024bf9a4'})  2025-09-04 00:43:05.958696 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b', 'data_vg': 'ceph-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b'})  2025-09-04 00:43:05.958708 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:05.958721 | orchestrator | 2025-09-04 00:43:05.958734 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-04 00:43:05.958747 | orchestrator | Thursday 04 September 2025 00:43:01 +0000 (0:00:00.156) 0:00:41.403 **** 2025-09-04 00:43:05.958760 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-23145802-20b8-57b1-85d3-97c3024bf9a4', 'data_vg': 'ceph-23145802-20b8-57b1-85d3-97c3024bf9a4'})  2025-09-04 00:43:05.958773 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b', 'data_vg': 'ceph-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b'})  2025-09-04 00:43:05.958786 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:05.958799 | orchestrator | 2025-09-04 00:43:05.958812 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-04 00:43:05.958826 | orchestrator | Thursday 04 September 2025 00:43:01 +0000 (0:00:00.161) 0:00:41.565 **** 2025-09-04 00:43:05.958872 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-23145802-20b8-57b1-85d3-97c3024bf9a4', 'data_vg': 'ceph-23145802-20b8-57b1-85d3-97c3024bf9a4'})  2025-09-04 00:43:05.958887 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b', 'data_vg': 'ceph-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b'})  2025-09-04 00:43:05.958900 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:05.958912 | orchestrator | 2025-09-04 00:43:05.958925 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-04 00:43:05.958937 | orchestrator | Thursday 04 September 2025 00:43:02 +0000 (0:00:00.195) 0:00:41.760 **** 2025-09-04 00:43:05.958950 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:05.958962 | orchestrator | 2025-09-04 00:43:05.958975 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-04 00:43:05.958988 | orchestrator | Thursday 04 September 2025 00:43:02 +0000 (0:00:00.224) 0:00:41.985 **** 2025-09-04 00:43:05.958998 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:05.959009 | orchestrator | 2025-09-04 00:43:05.959020 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-04 00:43:05.959030 | orchestrator | Thursday 04 September 2025 00:43:02 +0000 (0:00:00.128) 0:00:42.114 **** 2025-09-04 00:43:05.959041 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:05.959051 | orchestrator | 2025-09-04 00:43:05.959062 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-04 00:43:05.959073 | orchestrator | Thursday 04 September 2025 00:43:02 +0000 (0:00:00.133) 0:00:42.247 **** 2025-09-04 00:43:05.959083 | orchestrator | ok: [testbed-node-4] => { 2025-09-04 00:43:05.959094 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-04 00:43:05.959105 | orchestrator | } 2025-09-04 00:43:05.959116 | orchestrator | 2025-09-04 00:43:05.959126 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-04 00:43:05.959137 | orchestrator | Thursday 04 September 2025 00:43:02 +0000 (0:00:00.132) 0:00:42.379 **** 2025-09-04 00:43:05.959148 | orchestrator | ok: [testbed-node-4] => { 2025-09-04 00:43:05.959158 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-04 00:43:05.959169 | orchestrator | } 2025-09-04 00:43:05.959179 | orchestrator | 2025-09-04 00:43:05.959190 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-04 00:43:05.959200 | orchestrator | Thursday 04 September 2025 00:43:02 +0000 (0:00:00.194) 0:00:42.574 **** 2025-09-04 00:43:05.959211 | orchestrator | ok: [testbed-node-4] => { 2025-09-04 00:43:05.959221 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-04 00:43:05.959238 | orchestrator | } 2025-09-04 00:43:05.959249 | orchestrator | 2025-09-04 00:43:05.959260 | orchestrator | TASK [Gather DB VGs 
with total and available size in bytes] ******************** 2025-09-04 00:43:05.959270 | orchestrator | Thursday 04 September 2025 00:43:03 +0000 (0:00:00.134) 0:00:42.708 **** 2025-09-04 00:43:05.959281 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:43:05.959291 | orchestrator | 2025-09-04 00:43:05.959302 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-09-04 00:43:05.959313 | orchestrator | Thursday 04 September 2025 00:43:03 +0000 (0:00:00.733) 0:00:43.442 **** 2025-09-04 00:43:05.959324 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:43:05.959334 | orchestrator | 2025-09-04 00:43:05.959345 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-04 00:43:05.959355 | orchestrator | Thursday 04 September 2025 00:43:04 +0000 (0:00:00.528) 0:00:43.970 **** 2025-09-04 00:43:05.959366 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:43:05.959377 | orchestrator | 2025-09-04 00:43:05.959387 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-04 00:43:05.959398 | orchestrator | Thursday 04 September 2025 00:43:04 +0000 (0:00:00.522) 0:00:44.493 **** 2025-09-04 00:43:05.959408 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:43:05.959419 | orchestrator | 2025-09-04 00:43:05.959429 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-04 00:43:05.959440 | orchestrator | Thursday 04 September 2025 00:43:05 +0000 (0:00:00.178) 0:00:44.671 **** 2025-09-04 00:43:05.959451 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:05.959461 | orchestrator | 2025-09-04 00:43:05.959472 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-04 00:43:05.959482 | orchestrator | Thursday 04 September 2025 00:43:05 +0000 (0:00:00.120) 0:00:44.791 **** 2025-09-04 00:43:05.959499 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:05.959510 | orchestrator | 2025-09-04 00:43:05.959521 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-04 00:43:05.959531 | orchestrator | Thursday 04 September 2025 00:43:05 +0000 (0:00:00.111) 0:00:44.903 **** 2025-09-04 00:43:05.959542 | orchestrator | ok: [testbed-node-4] => { 2025-09-04 00:43:05.959553 | orchestrator |  "vgs_report": { 2025-09-04 00:43:05.959563 | orchestrator |  "vg": [] 2025-09-04 00:43:05.959574 | orchestrator |  } 2025-09-04 00:43:05.959585 | orchestrator | } 2025-09-04 00:43:05.959595 | orchestrator | 2025-09-04 00:43:05.959606 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-04 00:43:05.959617 | orchestrator | Thursday 04 September 2025 00:43:05 +0000 (0:00:00.148) 0:00:45.051 **** 2025-09-04 00:43:05.959627 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:05.959638 | orchestrator | 2025-09-04 00:43:05.959648 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-04 00:43:05.959659 | orchestrator | Thursday 04 September 2025 00:43:05 +0000 (0:00:00.136) 0:00:45.188 **** 2025-09-04 00:43:05.959670 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:05.959680 | orchestrator | 2025-09-04 00:43:05.959691 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-04 00:43:05.959701 | orchestrator | Thursday 04 September 2025 00:43:05 +0000 
(0:00:00.134) 0:00:45.322 **** 2025-09-04 00:43:05.959712 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:05.959723 | orchestrator | 2025-09-04 00:43:05.959733 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-04 00:43:05.959744 | orchestrator | Thursday 04 September 2025 00:43:05 +0000 (0:00:00.144) 0:00:45.466 **** 2025-09-04 00:43:05.959754 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:05.959765 | orchestrator | 2025-09-04 00:43:05.959776 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-04 00:43:05.959793 | orchestrator | Thursday 04 September 2025 00:43:05 +0000 (0:00:00.138) 0:00:45.605 **** 2025-09-04 00:43:10.763515 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:10.763602 | orchestrator | 2025-09-04 00:43:10.763641 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-04 00:43:10.763654 | orchestrator | Thursday 04 September 2025 00:43:06 +0000 (0:00:00.147) 0:00:45.752 **** 2025-09-04 00:43:10.763665 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:10.763676 | orchestrator | 2025-09-04 00:43:10.763686 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-04 00:43:10.763697 | orchestrator | Thursday 04 September 2025 00:43:06 +0000 (0:00:00.351) 0:00:46.104 **** 2025-09-04 00:43:10.763708 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:10.763718 | orchestrator | 2025-09-04 00:43:10.763729 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-04 00:43:10.763739 | orchestrator | Thursday 04 September 2025 00:43:06 +0000 (0:00:00.143) 0:00:46.247 **** 2025-09-04 00:43:10.763750 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:10.763760 | orchestrator | 2025-09-04 00:43:10.763771 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-04 00:43:10.763781 | orchestrator | Thursday 04 September 2025 00:43:06 +0000 (0:00:00.159) 0:00:46.406 **** 2025-09-04 00:43:10.763792 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:10.763802 | orchestrator | 2025-09-04 00:43:10.763813 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-04 00:43:10.763823 | orchestrator | Thursday 04 September 2025 00:43:06 +0000 (0:00:00.136) 0:00:46.543 **** 2025-09-04 00:43:10.763834 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:10.763902 | orchestrator | 2025-09-04 00:43:10.763913 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-04 00:43:10.763924 | orchestrator | Thursday 04 September 2025 00:43:07 +0000 (0:00:00.148) 0:00:46.691 **** 2025-09-04 00:43:10.763935 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:10.763945 | orchestrator | 2025-09-04 00:43:10.763956 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-04 00:43:10.763967 | orchestrator | Thursday 04 September 2025 00:43:07 +0000 (0:00:00.128) 0:00:46.820 **** 2025-09-04 00:43:10.763977 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:10.763987 | orchestrator | 2025-09-04 00:43:10.763998 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-04 00:43:10.764009 | orchestrator | Thursday 04 September 2025 
00:43:07 +0000 (0:00:00.136) 0:00:46.957 **** 2025-09-04 00:43:10.764019 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:10.764030 | orchestrator | 2025-09-04 00:43:10.764040 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-04 00:43:10.764050 | orchestrator | Thursday 04 September 2025 00:43:07 +0000 (0:00:00.138) 0:00:47.095 **** 2025-09-04 00:43:10.764061 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:10.764071 | orchestrator | 2025-09-04 00:43:10.764085 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-04 00:43:10.764097 | orchestrator | Thursday 04 September 2025 00:43:07 +0000 (0:00:00.142) 0:00:47.237 **** 2025-09-04 00:43:10.764122 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-23145802-20b8-57b1-85d3-97c3024bf9a4', 'data_vg': 'ceph-23145802-20b8-57b1-85d3-97c3024bf9a4'})  2025-09-04 00:43:10.764136 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b', 'data_vg': 'ceph-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b'})  2025-09-04 00:43:10.764148 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:10.764162 | orchestrator | 2025-09-04 00:43:10.764174 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-04 00:43:10.764187 | orchestrator | Thursday 04 September 2025 00:43:07 +0000 (0:00:00.163) 0:00:47.400 **** 2025-09-04 00:43:10.764199 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-23145802-20b8-57b1-85d3-97c3024bf9a4', 'data_vg': 'ceph-23145802-20b8-57b1-85d3-97c3024bf9a4'})  2025-09-04 00:43:10.764211 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b', 'data_vg': 'ceph-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b'})  2025-09-04 00:43:10.764232 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:10.764245 | orchestrator | 2025-09-04 00:43:10.764257 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-04 00:43:10.764269 | orchestrator | Thursday 04 September 2025 00:43:07 +0000 (0:00:00.165) 0:00:47.566 **** 2025-09-04 00:43:10.764282 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-23145802-20b8-57b1-85d3-97c3024bf9a4', 'data_vg': 'ceph-23145802-20b8-57b1-85d3-97c3024bf9a4'})  2025-09-04 00:43:10.764294 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b', 'data_vg': 'ceph-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b'})  2025-09-04 00:43:10.764307 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:10.764320 | orchestrator | 2025-09-04 00:43:10.764333 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-04 00:43:10.764343 | orchestrator | Thursday 04 September 2025 00:43:08 +0000 (0:00:00.150) 0:00:47.716 **** 2025-09-04 00:43:10.764354 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-23145802-20b8-57b1-85d3-97c3024bf9a4', 'data_vg': 'ceph-23145802-20b8-57b1-85d3-97c3024bf9a4'})  2025-09-04 00:43:10.764365 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b', 'data_vg': 'ceph-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b'})  2025-09-04 00:43:10.764375 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:10.764386 | orchestrator | 2025-09-04 00:43:10.764397 | 
orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-04 00:43:10.764423 | orchestrator | Thursday 04 September 2025 00:43:08 +0000 (0:00:00.346) 0:00:48.063 **** 2025-09-04 00:43:10.764435 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-23145802-20b8-57b1-85d3-97c3024bf9a4', 'data_vg': 'ceph-23145802-20b8-57b1-85d3-97c3024bf9a4'})  2025-09-04 00:43:10.764445 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b', 'data_vg': 'ceph-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b'})  2025-09-04 00:43:10.764456 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:10.764466 | orchestrator | 2025-09-04 00:43:10.764477 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-09-04 00:43:10.764487 | orchestrator | Thursday 04 September 2025 00:43:08 +0000 (0:00:00.168) 0:00:48.232 **** 2025-09-04 00:43:10.764498 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-23145802-20b8-57b1-85d3-97c3024bf9a4', 'data_vg': 'ceph-23145802-20b8-57b1-85d3-97c3024bf9a4'})  2025-09-04 00:43:10.764508 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b', 'data_vg': 'ceph-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b'})  2025-09-04 00:43:10.764519 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:10.764530 | orchestrator | 2025-09-04 00:43:10.764541 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-04 00:43:10.764552 | orchestrator | Thursday 04 September 2025 00:43:08 +0000 (0:00:00.161) 0:00:48.394 **** 2025-09-04 00:43:10.764562 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-23145802-20b8-57b1-85d3-97c3024bf9a4', 'data_vg': 'ceph-23145802-20b8-57b1-85d3-97c3024bf9a4'})  2025-09-04 00:43:10.764573 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b', 'data_vg': 'ceph-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b'})  2025-09-04 00:43:10.764584 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:10.764595 | orchestrator | 2025-09-04 00:43:10.764605 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-04 00:43:10.764616 | orchestrator | Thursday 04 September 2025 00:43:08 +0000 (0:00:00.163) 0:00:48.557 **** 2025-09-04 00:43:10.764626 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-23145802-20b8-57b1-85d3-97c3024bf9a4', 'data_vg': 'ceph-23145802-20b8-57b1-85d3-97c3024bf9a4'})  2025-09-04 00:43:10.764643 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b', 'data_vg': 'ceph-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b'})  2025-09-04 00:43:10.764654 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:10.764665 | orchestrator | 2025-09-04 00:43:10.764680 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-04 00:43:10.764691 | orchestrator | Thursday 04 September 2025 00:43:09 +0000 (0:00:00.157) 0:00:48.715 **** 2025-09-04 00:43:10.764702 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:43:10.764713 | orchestrator | 2025-09-04 00:43:10.764723 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-04 00:43:10.764734 | orchestrator | Thursday 04 September 2025 00:43:09 +0000 (0:00:00.525) 
0:00:49.241 **** 2025-09-04 00:43:10.764744 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:43:10.764755 | orchestrator | 2025-09-04 00:43:10.764765 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-04 00:43:10.764776 | orchestrator | Thursday 04 September 2025 00:43:10 +0000 (0:00:00.540) 0:00:49.781 **** 2025-09-04 00:43:10.764787 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:43:10.764797 | orchestrator | 2025-09-04 00:43:10.764808 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-04 00:43:10.764819 | orchestrator | Thursday 04 September 2025 00:43:10 +0000 (0:00:00.146) 0:00:49.928 **** 2025-09-04 00:43:10.764830 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-23145802-20b8-57b1-85d3-97c3024bf9a4', 'vg_name': 'ceph-23145802-20b8-57b1-85d3-97c3024bf9a4'}) 2025-09-04 00:43:10.764858 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b', 'vg_name': 'ceph-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b'}) 2025-09-04 00:43:10.764869 | orchestrator | 2025-09-04 00:43:10.764880 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-04 00:43:10.764891 | orchestrator | Thursday 04 September 2025 00:43:10 +0000 (0:00:00.180) 0:00:50.109 **** 2025-09-04 00:43:10.764901 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-23145802-20b8-57b1-85d3-97c3024bf9a4', 'data_vg': 'ceph-23145802-20b8-57b1-85d3-97c3024bf9a4'})  2025-09-04 00:43:10.764912 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b', 'data_vg': 'ceph-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b'})  2025-09-04 00:43:10.764923 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:10.764934 | orchestrator | 2025-09-04 00:43:10.764944 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-04 00:43:10.764955 | orchestrator | Thursday 04 September 2025 00:43:10 +0000 (0:00:00.155) 0:00:50.264 **** 2025-09-04 00:43:10.764966 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-23145802-20b8-57b1-85d3-97c3024bf9a4', 'data_vg': 'ceph-23145802-20b8-57b1-85d3-97c3024bf9a4'})  2025-09-04 00:43:10.764977 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b', 'data_vg': 'ceph-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b'})  2025-09-04 00:43:10.764995 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:16.932723 | orchestrator | 2025-09-04 00:43:16.932881 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-04 00:43:16.932902 | orchestrator | Thursday 04 September 2025 00:43:10 +0000 (0:00:00.150) 0:00:50.415 **** 2025-09-04 00:43:16.932915 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-23145802-20b8-57b1-85d3-97c3024bf9a4', 'data_vg': 'ceph-23145802-20b8-57b1-85d3-97c3024bf9a4'})  2025-09-04 00:43:16.932928 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b', 'data_vg': 'ceph-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b'})  2025-09-04 00:43:16.932940 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:43:16.932952 | orchestrator | 2025-09-04 00:43:16.932963 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-04 00:43:16.932974 
| orchestrator | Thursday 04 September 2025 00:43:10 +0000 (0:00:00.169) 0:00:50.585 **** 2025-09-04 00:43:16.933005 | orchestrator | ok: [testbed-node-4] => { 2025-09-04 00:43:16.933017 | orchestrator |  "lvm_report": { 2025-09-04 00:43:16.933029 | orchestrator |  "lv": [ 2025-09-04 00:43:16.933040 | orchestrator |  { 2025-09-04 00:43:16.933051 | orchestrator |  "lv_name": "osd-block-23145802-20b8-57b1-85d3-97c3024bf9a4", 2025-09-04 00:43:16.933062 | orchestrator |  "vg_name": "ceph-23145802-20b8-57b1-85d3-97c3024bf9a4" 2025-09-04 00:43:16.933073 | orchestrator |  }, 2025-09-04 00:43:16.933084 | orchestrator |  { 2025-09-04 00:43:16.933095 | orchestrator |  "lv_name": "osd-block-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b", 2025-09-04 00:43:16.933105 | orchestrator |  "vg_name": "ceph-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b" 2025-09-04 00:43:16.933116 | orchestrator |  } 2025-09-04 00:43:16.933127 | orchestrator |  ], 2025-09-04 00:43:16.933137 | orchestrator |  "pv": [ 2025-09-04 00:43:16.933148 | orchestrator |  { 2025-09-04 00:43:16.933159 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-04 00:43:16.933170 | orchestrator |  "vg_name": "ceph-23145802-20b8-57b1-85d3-97c3024bf9a4" 2025-09-04 00:43:16.933180 | orchestrator |  }, 2025-09-04 00:43:16.933191 | orchestrator |  { 2025-09-04 00:43:16.933201 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-04 00:43:16.933212 | orchestrator |  "vg_name": "ceph-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b" 2025-09-04 00:43:16.933223 | orchestrator |  } 2025-09-04 00:43:16.933331 | orchestrator |  ] 2025-09-04 00:43:16.933346 | orchestrator |  } 2025-09-04 00:43:16.933360 | orchestrator | } 2025-09-04 00:43:16.933372 | orchestrator | 2025-09-04 00:43:16.933385 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-04 00:43:16.933398 | orchestrator | 2025-09-04 00:43:16.933411 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-04 00:43:16.933423 | orchestrator | Thursday 04 September 2025 00:43:11 +0000 (0:00:00.519) 0:00:51.104 **** 2025-09-04 00:43:16.933440 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-04 00:43:16.933459 | orchestrator | 2025-09-04 00:43:16.933479 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-04 00:43:16.933499 | orchestrator | Thursday 04 September 2025 00:43:11 +0000 (0:00:00.258) 0:00:51.363 **** 2025-09-04 00:43:16.933517 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:43:16.933536 | orchestrator | 2025-09-04 00:43:16.933549 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:43:16.933562 | orchestrator | Thursday 04 September 2025 00:43:11 +0000 (0:00:00.258) 0:00:51.622 **** 2025-09-04 00:43:16.933575 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-09-04 00:43:16.933588 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-09-04 00:43:16.933599 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-09-04 00:43:16.933613 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-09-04 00:43:16.933625 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-04 00:43:16.933636 | orchestrator | included: /ansible/tasks/_add-device-links.yml 
for testbed-node-5 => (item=loop5) 2025-09-04 00:43:16.933647 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-04 00:43:16.933658 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-04 00:43:16.933668 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-04 00:43:16.933679 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-04 00:43:16.933689 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-04 00:43:16.933711 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-04 00:43:16.933721 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-04 00:43:16.933732 | orchestrator | 2025-09-04 00:43:16.933743 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:43:16.933754 | orchestrator | Thursday 04 September 2025 00:43:12 +0000 (0:00:00.432) 0:00:52.055 **** 2025-09-04 00:43:16.933764 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:16.933778 | orchestrator | 2025-09-04 00:43:16.933789 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:43:16.933800 | orchestrator | Thursday 04 September 2025 00:43:12 +0000 (0:00:00.195) 0:00:52.250 **** 2025-09-04 00:43:16.933810 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:16.933821 | orchestrator | 2025-09-04 00:43:16.933832 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:43:16.933881 | orchestrator | Thursday 04 September 2025 00:43:12 +0000 (0:00:00.198) 0:00:52.448 **** 2025-09-04 00:43:16.933893 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:16.933904 | orchestrator | 2025-09-04 00:43:16.933915 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:43:16.933925 | orchestrator | Thursday 04 September 2025 00:43:13 +0000 (0:00:00.224) 0:00:52.672 **** 2025-09-04 00:43:16.933936 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:16.933946 | orchestrator | 2025-09-04 00:43:16.933957 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:43:16.933968 | orchestrator | Thursday 04 September 2025 00:43:13 +0000 (0:00:00.200) 0:00:52.873 **** 2025-09-04 00:43:16.933979 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:16.933990 | orchestrator | 2025-09-04 00:43:16.934097 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:43:16.934114 | orchestrator | Thursday 04 September 2025 00:43:13 +0000 (0:00:00.186) 0:00:53.060 **** 2025-09-04 00:43:16.934125 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:16.934136 | orchestrator | 2025-09-04 00:43:16.934146 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:43:16.934157 | orchestrator | Thursday 04 September 2025 00:43:13 +0000 (0:00:00.578) 0:00:53.639 **** 2025-09-04 00:43:16.934168 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:16.934178 | orchestrator | 2025-09-04 00:43:16.934189 | orchestrator | TASK [Add known links to the list of available block 
devices] ****************** 2025-09-04 00:43:16.934200 | orchestrator | Thursday 04 September 2025 00:43:14 +0000 (0:00:00.205) 0:00:53.845 **** 2025-09-04 00:43:16.934211 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:16.934221 | orchestrator | 2025-09-04 00:43:16.934232 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:43:16.934243 | orchestrator | Thursday 04 September 2025 00:43:14 +0000 (0:00:00.210) 0:00:54.056 **** 2025-09-04 00:43:16.934253 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb) 2025-09-04 00:43:16.934265 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb) 2025-09-04 00:43:16.934276 | orchestrator | 2025-09-04 00:43:16.934286 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:43:16.934297 | orchestrator | Thursday 04 September 2025 00:43:14 +0000 (0:00:00.455) 0:00:54.511 **** 2025-09-04 00:43:16.934307 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d9ebab6f-eb3e-4afd-8760-ef197f6a89e1) 2025-09-04 00:43:16.934318 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d9ebab6f-eb3e-4afd-8760-ef197f6a89e1) 2025-09-04 00:43:16.934329 | orchestrator | 2025-09-04 00:43:16.934339 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:43:16.934350 | orchestrator | Thursday 04 September 2025 00:43:15 +0000 (0:00:00.419) 0:00:54.931 **** 2025-09-04 00:43:16.934372 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_bcfba7c6-2fb5-4d57-9e19-86dc2ee83b3a) 2025-09-04 00:43:16.934384 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_bcfba7c6-2fb5-4d57-9e19-86dc2ee83b3a) 2025-09-04 00:43:16.934394 | orchestrator | 2025-09-04 00:43:16.934405 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:43:16.934415 | orchestrator | Thursday 04 September 2025 00:43:15 +0000 (0:00:00.431) 0:00:55.363 **** 2025-09-04 00:43:16.934426 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a760aba3-b640-428b-b8ab-7cd9fc53fd0e) 2025-09-04 00:43:16.934437 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a760aba3-b640-428b-b8ab-7cd9fc53fd0e) 2025-09-04 00:43:16.934447 | orchestrator | 2025-09-04 00:43:16.934458 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-04 00:43:16.934468 | orchestrator | Thursday 04 September 2025 00:43:16 +0000 (0:00:00.444) 0:00:55.807 **** 2025-09-04 00:43:16.934479 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-04 00:43:16.934490 | orchestrator | 2025-09-04 00:43:16.934500 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:43:16.934511 | orchestrator | Thursday 04 September 2025 00:43:16 +0000 (0:00:00.355) 0:00:56.163 **** 2025-09-04 00:43:16.934521 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-04 00:43:16.934532 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-04 00:43:16.934542 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-04 00:43:16.934553 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-04 00:43:16.934564 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-04 00:43:16.934574 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-04 00:43:16.934584 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-04 00:43:16.934595 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-04 00:43:16.934605 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-04 00:43:16.934616 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-04 00:43:16.934626 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-04 00:43:16.934646 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-04 00:43:26.140089 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-04 00:43:26.140210 | orchestrator | 2025-09-04 00:43:26.140234 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:43:26.140252 | orchestrator | Thursday 04 September 2025 00:43:16 +0000 (0:00:00.409) 0:00:56.573 **** 2025-09-04 00:43:26.140270 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:26.140287 | orchestrator | 2025-09-04 00:43:26.140304 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:43:26.140322 | orchestrator | Thursday 04 September 2025 00:43:17 +0000 (0:00:00.194) 0:00:56.768 **** 2025-09-04 00:43:26.140338 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:26.140355 | orchestrator | 2025-09-04 00:43:26.140371 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:43:26.140387 | orchestrator | Thursday 04 September 2025 00:43:17 +0000 (0:00:00.210) 0:00:56.978 **** 2025-09-04 00:43:26.140403 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:26.140420 | orchestrator | 2025-09-04 00:43:26.140436 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:43:26.140483 | orchestrator | Thursday 04 September 2025 00:43:17 +0000 (0:00:00.649) 0:00:57.627 **** 2025-09-04 00:43:26.140502 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:26.140521 | orchestrator | 2025-09-04 00:43:26.140540 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:43:26.140561 | orchestrator | Thursday 04 September 2025 00:43:18 +0000 (0:00:00.209) 0:00:57.837 **** 2025-09-04 00:43:26.140582 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:26.140603 | orchestrator | 2025-09-04 00:43:26.140624 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:43:26.140646 | orchestrator | Thursday 04 September 2025 00:43:18 +0000 (0:00:00.204) 0:00:58.041 **** 2025-09-04 00:43:26.140669 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:26.140690 | orchestrator | 2025-09-04 00:43:26.140711 | orchestrator | TASK [Add known partitions to the list of available 
block devices] ************* 2025-09-04 00:43:26.140724 | orchestrator | Thursday 04 September 2025 00:43:18 +0000 (0:00:00.233) 0:00:58.275 **** 2025-09-04 00:43:26.140738 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:26.140750 | orchestrator | 2025-09-04 00:43:26.140763 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:43:26.140776 | orchestrator | Thursday 04 September 2025 00:43:18 +0000 (0:00:00.213) 0:00:58.488 **** 2025-09-04 00:43:26.140789 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:26.140801 | orchestrator | 2025-09-04 00:43:26.140814 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:43:26.140827 | orchestrator | Thursday 04 September 2025 00:43:19 +0000 (0:00:00.202) 0:00:58.691 **** 2025-09-04 00:43:26.140866 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-09-04 00:43:26.140880 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-09-04 00:43:26.140906 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-04 00:43:26.140919 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-04 00:43:26.140931 | orchestrator | 2025-09-04 00:43:26.140943 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:43:26.140955 | orchestrator | Thursday 04 September 2025 00:43:19 +0000 (0:00:00.666) 0:00:59.357 **** 2025-09-04 00:43:26.140968 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:26.140981 | orchestrator | 2025-09-04 00:43:26.140994 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:43:26.141005 | orchestrator | Thursday 04 September 2025 00:43:19 +0000 (0:00:00.196) 0:00:59.554 **** 2025-09-04 00:43:26.141016 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:26.141026 | orchestrator | 2025-09-04 00:43:26.141037 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:43:26.141048 | orchestrator | Thursday 04 September 2025 00:43:20 +0000 (0:00:00.205) 0:00:59.759 **** 2025-09-04 00:43:26.141059 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:26.141070 | orchestrator | 2025-09-04 00:43:26.141080 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-04 00:43:26.141091 | orchestrator | Thursday 04 September 2025 00:43:20 +0000 (0:00:00.194) 0:00:59.953 **** 2025-09-04 00:43:26.141102 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:26.141112 | orchestrator | 2025-09-04 00:43:26.141123 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-04 00:43:26.141133 | orchestrator | Thursday 04 September 2025 00:43:20 +0000 (0:00:00.187) 0:01:00.141 **** 2025-09-04 00:43:26.141144 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:26.141155 | orchestrator | 2025-09-04 00:43:26.141165 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-09-04 00:43:26.141176 | orchestrator | Thursday 04 September 2025 00:43:20 +0000 (0:00:00.353) 0:01:00.495 **** 2025-09-04 00:43:26.141186 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4210a630-ee04-544c-a974-f1a5ed1af07b'}}) 2025-09-04 00:43:26.141198 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 
'f811dc1d-ef4c-5806-9269-19449158751d'}}) 2025-09-04 00:43:26.141222 | orchestrator | 2025-09-04 00:43:26.141233 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-04 00:43:26.141244 | orchestrator | Thursday 04 September 2025 00:43:21 +0000 (0:00:00.207) 0:01:00.703 **** 2025-09-04 00:43:26.141255 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4210a630-ee04-544c-a974-f1a5ed1af07b', 'data_vg': 'ceph-4210a630-ee04-544c-a974-f1a5ed1af07b'}) 2025-09-04 00:43:26.141267 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f811dc1d-ef4c-5806-9269-19449158751d', 'data_vg': 'ceph-f811dc1d-ef4c-5806-9269-19449158751d'}) 2025-09-04 00:43:26.141278 | orchestrator | 2025-09-04 00:43:26.141289 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-04 00:43:26.141319 | orchestrator | Thursday 04 September 2025 00:43:22 +0000 (0:00:01.907) 0:01:02.610 **** 2025-09-04 00:43:26.141331 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4210a630-ee04-544c-a974-f1a5ed1af07b', 'data_vg': 'ceph-4210a630-ee04-544c-a974-f1a5ed1af07b'})  2025-09-04 00:43:26.141343 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f811dc1d-ef4c-5806-9269-19449158751d', 'data_vg': 'ceph-f811dc1d-ef4c-5806-9269-19449158751d'})  2025-09-04 00:43:26.141354 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:26.141365 | orchestrator | 2025-09-04 00:43:26.141375 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-09-04 00:43:26.141386 | orchestrator | Thursday 04 September 2025 00:43:23 +0000 (0:00:00.143) 0:01:02.754 **** 2025-09-04 00:43:26.141397 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4210a630-ee04-544c-a974-f1a5ed1af07b', 'data_vg': 'ceph-4210a630-ee04-544c-a974-f1a5ed1af07b'}) 2025-09-04 00:43:26.141408 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f811dc1d-ef4c-5806-9269-19449158751d', 'data_vg': 'ceph-f811dc1d-ef4c-5806-9269-19449158751d'}) 2025-09-04 00:43:26.141419 | orchestrator | 2025-09-04 00:43:26.141430 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-04 00:43:26.141441 | orchestrator | Thursday 04 September 2025 00:43:24 +0000 (0:00:01.408) 0:01:04.163 **** 2025-09-04 00:43:26.141452 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4210a630-ee04-544c-a974-f1a5ed1af07b', 'data_vg': 'ceph-4210a630-ee04-544c-a974-f1a5ed1af07b'})  2025-09-04 00:43:26.141463 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f811dc1d-ef4c-5806-9269-19449158751d', 'data_vg': 'ceph-f811dc1d-ef4c-5806-9269-19449158751d'})  2025-09-04 00:43:26.141474 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:26.141485 | orchestrator | 2025-09-04 00:43:26.141496 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-04 00:43:26.141506 | orchestrator | Thursday 04 September 2025 00:43:24 +0000 (0:00:00.200) 0:01:04.363 **** 2025-09-04 00:43:26.141517 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:26.141528 | orchestrator | 2025-09-04 00:43:26.141538 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-04 00:43:26.141549 | orchestrator | Thursday 04 September 2025 00:43:24 +0000 (0:00:00.144) 0:01:04.507 **** 2025-09-04 
00:43:26.141560 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4210a630-ee04-544c-a974-f1a5ed1af07b', 'data_vg': 'ceph-4210a630-ee04-544c-a974-f1a5ed1af07b'})  2025-09-04 00:43:26.141575 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f811dc1d-ef4c-5806-9269-19449158751d', 'data_vg': 'ceph-f811dc1d-ef4c-5806-9269-19449158751d'})  2025-09-04 00:43:26.141587 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:26.141597 | orchestrator | 2025-09-04 00:43:26.141608 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-04 00:43:26.141622 | orchestrator | Thursday 04 September 2025 00:43:25 +0000 (0:00:00.159) 0:01:04.667 **** 2025-09-04 00:43:26.141642 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:26.141671 | orchestrator | 2025-09-04 00:43:26.141691 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-04 00:43:26.141711 | orchestrator | Thursday 04 September 2025 00:43:25 +0000 (0:00:00.137) 0:01:04.804 **** 2025-09-04 00:43:26.141729 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4210a630-ee04-544c-a974-f1a5ed1af07b', 'data_vg': 'ceph-4210a630-ee04-544c-a974-f1a5ed1af07b'})  2025-09-04 00:43:26.141743 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f811dc1d-ef4c-5806-9269-19449158751d', 'data_vg': 'ceph-f811dc1d-ef4c-5806-9269-19449158751d'})  2025-09-04 00:43:26.141753 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:26.141764 | orchestrator | 2025-09-04 00:43:26.141775 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-04 00:43:26.141785 | orchestrator | Thursday 04 September 2025 00:43:25 +0000 (0:00:00.181) 0:01:04.986 **** 2025-09-04 00:43:26.141796 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:26.141806 | orchestrator | 2025-09-04 00:43:26.141817 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-09-04 00:43:26.141827 | orchestrator | Thursday 04 September 2025 00:43:25 +0000 (0:00:00.139) 0:01:05.125 **** 2025-09-04 00:43:26.141872 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4210a630-ee04-544c-a974-f1a5ed1af07b', 'data_vg': 'ceph-4210a630-ee04-544c-a974-f1a5ed1af07b'})  2025-09-04 00:43:26.141883 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f811dc1d-ef4c-5806-9269-19449158751d', 'data_vg': 'ceph-f811dc1d-ef4c-5806-9269-19449158751d'})  2025-09-04 00:43:26.141894 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:26.141905 | orchestrator | 2025-09-04 00:43:26.141915 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-04 00:43:26.141926 | orchestrator | Thursday 04 September 2025 00:43:25 +0000 (0:00:00.159) 0:01:05.285 **** 2025-09-04 00:43:26.141937 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:43:26.141948 | orchestrator | 2025-09-04 00:43:26.141959 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-04 00:43:26.141969 | orchestrator | Thursday 04 September 2025 00:43:25 +0000 (0:00:00.347) 0:01:05.632 **** 2025-09-04 00:43:26.141989 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4210a630-ee04-544c-a974-f1a5ed1af07b', 'data_vg': 'ceph-4210a630-ee04-544c-a974-f1a5ed1af07b'})  2025-09-04 00:43:32.352915 | orchestrator | 
skipping: [testbed-node-5] => (item={'data': 'osd-block-f811dc1d-ef4c-5806-9269-19449158751d', 'data_vg': 'ceph-f811dc1d-ef4c-5806-9269-19449158751d'})  2025-09-04 00:43:32.353010 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:32.353026 | orchestrator | 2025-09-04 00:43:32.353039 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-04 00:43:32.353051 | orchestrator | Thursday 04 September 2025 00:43:26 +0000 (0:00:00.160) 0:01:05.792 **** 2025-09-04 00:43:32.353062 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4210a630-ee04-544c-a974-f1a5ed1af07b', 'data_vg': 'ceph-4210a630-ee04-544c-a974-f1a5ed1af07b'})  2025-09-04 00:43:32.353073 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f811dc1d-ef4c-5806-9269-19449158751d', 'data_vg': 'ceph-f811dc1d-ef4c-5806-9269-19449158751d'})  2025-09-04 00:43:32.353084 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:32.353095 | orchestrator | 2025-09-04 00:43:32.353106 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-04 00:43:32.353117 | orchestrator | Thursday 04 September 2025 00:43:26 +0000 (0:00:00.168) 0:01:05.960 **** 2025-09-04 00:43:32.353128 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4210a630-ee04-544c-a974-f1a5ed1af07b', 'data_vg': 'ceph-4210a630-ee04-544c-a974-f1a5ed1af07b'})  2025-09-04 00:43:32.353139 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f811dc1d-ef4c-5806-9269-19449158751d', 'data_vg': 'ceph-f811dc1d-ef4c-5806-9269-19449158751d'})  2025-09-04 00:43:32.353150 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:32.353183 | orchestrator | 2025-09-04 00:43:32.353194 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-04 00:43:32.353205 | orchestrator | Thursday 04 September 2025 00:43:26 +0000 (0:00:00.169) 0:01:06.130 **** 2025-09-04 00:43:32.353215 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:32.353226 | orchestrator | 2025-09-04 00:43:32.353236 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-04 00:43:32.353247 | orchestrator | Thursday 04 September 2025 00:43:26 +0000 (0:00:00.151) 0:01:06.282 **** 2025-09-04 00:43:32.353258 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:32.353268 | orchestrator | 2025-09-04 00:43:32.353279 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-04 00:43:32.353289 | orchestrator | Thursday 04 September 2025 00:43:26 +0000 (0:00:00.131) 0:01:06.414 **** 2025-09-04 00:43:32.353300 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:32.353310 | orchestrator | 2025-09-04 00:43:32.353321 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-04 00:43:32.353331 | orchestrator | Thursday 04 September 2025 00:43:26 +0000 (0:00:00.133) 0:01:06.547 **** 2025-09-04 00:43:32.353342 | orchestrator | ok: [testbed-node-5] => { 2025-09-04 00:43:32.353353 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-04 00:43:32.353364 | orchestrator | } 2025-09-04 00:43:32.353375 | orchestrator | 2025-09-04 00:43:32.353385 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-04 00:43:32.353396 | orchestrator | Thursday 04 September 2025 00:43:27 +0000 (0:00:00.157) 
0:01:06.705 **** 2025-09-04 00:43:32.353407 | orchestrator | ok: [testbed-node-5] => { 2025-09-04 00:43:32.353417 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-04 00:43:32.353428 | orchestrator | } 2025-09-04 00:43:32.353439 | orchestrator | 2025-09-04 00:43:32.353449 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-04 00:43:32.353460 | orchestrator | Thursday 04 September 2025 00:43:27 +0000 (0:00:00.156) 0:01:06.861 **** 2025-09-04 00:43:32.353471 | orchestrator | ok: [testbed-node-5] => { 2025-09-04 00:43:32.353481 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-04 00:43:32.353492 | orchestrator | } 2025-09-04 00:43:32.353503 | orchestrator | 2025-09-04 00:43:32.353514 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-04 00:43:32.353525 | orchestrator | Thursday 04 September 2025 00:43:27 +0000 (0:00:00.156) 0:01:07.018 **** 2025-09-04 00:43:32.353536 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:43:32.353546 | orchestrator | 2025-09-04 00:43:32.353557 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-09-04 00:43:32.353568 | orchestrator | Thursday 04 September 2025 00:43:27 +0000 (0:00:00.510) 0:01:07.528 **** 2025-09-04 00:43:32.353578 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:43:32.353589 | orchestrator | 2025-09-04 00:43:32.353599 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-04 00:43:32.353610 | orchestrator | Thursday 04 September 2025 00:43:28 +0000 (0:00:00.514) 0:01:08.043 **** 2025-09-04 00:43:32.353621 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:43:32.353631 | orchestrator | 2025-09-04 00:43:32.353642 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-04 00:43:32.353652 | orchestrator | Thursday 04 September 2025 00:43:29 +0000 (0:00:00.736) 0:01:08.780 **** 2025-09-04 00:43:32.353663 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:43:32.353674 | orchestrator | 2025-09-04 00:43:32.353684 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-04 00:43:32.353695 | orchestrator | Thursday 04 September 2025 00:43:29 +0000 (0:00:00.140) 0:01:08.920 **** 2025-09-04 00:43:32.353706 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:32.353716 | orchestrator | 2025-09-04 00:43:32.353727 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-04 00:43:32.353738 | orchestrator | Thursday 04 September 2025 00:43:29 +0000 (0:00:00.111) 0:01:09.032 **** 2025-09-04 00:43:32.353755 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:32.353766 | orchestrator | 2025-09-04 00:43:32.353777 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-04 00:43:32.353787 | orchestrator | Thursday 04 September 2025 00:43:29 +0000 (0:00:00.128) 0:01:09.161 **** 2025-09-04 00:43:32.353798 | orchestrator | ok: [testbed-node-5] => { 2025-09-04 00:43:32.353824 | orchestrator |  "vgs_report": { 2025-09-04 00:43:32.353853 | orchestrator |  "vg": [] 2025-09-04 00:43:32.353880 | orchestrator |  } 2025-09-04 00:43:32.353891 | orchestrator | } 2025-09-04 00:43:32.353902 | orchestrator | 2025-09-04 00:43:32.353912 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 
2025-09-04 00:43:32.353923 | orchestrator | Thursday 04 September 2025 00:43:29 +0000 (0:00:00.140) 0:01:09.301 **** 2025-09-04 00:43:32.353933 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:32.353944 | orchestrator | 2025-09-04 00:43:32.353955 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-04 00:43:32.353978 | orchestrator | Thursday 04 September 2025 00:43:29 +0000 (0:00:00.143) 0:01:09.445 **** 2025-09-04 00:43:32.353999 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:32.354010 | orchestrator | 2025-09-04 00:43:32.354120 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-04 00:43:32.354132 | orchestrator | Thursday 04 September 2025 00:43:29 +0000 (0:00:00.147) 0:01:09.592 **** 2025-09-04 00:43:32.354143 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:32.354154 | orchestrator | 2025-09-04 00:43:32.354165 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-04 00:43:32.354175 | orchestrator | Thursday 04 September 2025 00:43:30 +0000 (0:00:00.138) 0:01:09.731 **** 2025-09-04 00:43:32.354186 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:32.354197 | orchestrator | 2025-09-04 00:43:32.354208 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-04 00:43:32.354218 | orchestrator | Thursday 04 September 2025 00:43:30 +0000 (0:00:00.148) 0:01:09.879 **** 2025-09-04 00:43:32.354229 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:32.354239 | orchestrator | 2025-09-04 00:43:32.354250 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-04 00:43:32.354261 | orchestrator | Thursday 04 September 2025 00:43:30 +0000 (0:00:00.133) 0:01:10.013 **** 2025-09-04 00:43:32.354271 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:32.354282 | orchestrator | 2025-09-04 00:43:32.354293 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-04 00:43:32.354303 | orchestrator | Thursday 04 September 2025 00:43:30 +0000 (0:00:00.129) 0:01:10.142 **** 2025-09-04 00:43:32.354314 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:32.354324 | orchestrator | 2025-09-04 00:43:32.354335 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-04 00:43:32.354345 | orchestrator | Thursday 04 September 2025 00:43:30 +0000 (0:00:00.143) 0:01:10.286 **** 2025-09-04 00:43:32.354356 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:32.354367 | orchestrator | 2025-09-04 00:43:32.354377 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-04 00:43:32.354388 | orchestrator | Thursday 04 September 2025 00:43:30 +0000 (0:00:00.141) 0:01:10.428 **** 2025-09-04 00:43:32.354398 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:32.354409 | orchestrator | 2025-09-04 00:43:32.354419 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-04 00:43:32.354436 | orchestrator | Thursday 04 September 2025 00:43:31 +0000 (0:00:00.358) 0:01:10.787 **** 2025-09-04 00:43:32.354447 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:32.354458 | orchestrator | 2025-09-04 00:43:32.354468 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] 
********************* 2025-09-04 00:43:32.354479 | orchestrator | Thursday 04 September 2025 00:43:31 +0000 (0:00:00.139) 0:01:10.926 **** 2025-09-04 00:43:32.354490 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:32.354518 | orchestrator | 2025-09-04 00:43:32.354540 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-04 00:43:32.354561 | orchestrator | Thursday 04 September 2025 00:43:31 +0000 (0:00:00.139) 0:01:11.066 **** 2025-09-04 00:43:32.354573 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:32.354584 | orchestrator | 2025-09-04 00:43:32.354594 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-04 00:43:32.354605 | orchestrator | Thursday 04 September 2025 00:43:31 +0000 (0:00:00.147) 0:01:11.214 **** 2025-09-04 00:43:32.354616 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:32.354626 | orchestrator | 2025-09-04 00:43:32.354637 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-04 00:43:32.354647 | orchestrator | Thursday 04 September 2025 00:43:31 +0000 (0:00:00.159) 0:01:11.374 **** 2025-09-04 00:43:32.354658 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:32.354668 | orchestrator | 2025-09-04 00:43:32.354679 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-04 00:43:32.354690 | orchestrator | Thursday 04 September 2025 00:43:31 +0000 (0:00:00.148) 0:01:11.522 **** 2025-09-04 00:43:32.354700 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4210a630-ee04-544c-a974-f1a5ed1af07b', 'data_vg': 'ceph-4210a630-ee04-544c-a974-f1a5ed1af07b'})  2025-09-04 00:43:32.354711 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f811dc1d-ef4c-5806-9269-19449158751d', 'data_vg': 'ceph-f811dc1d-ef4c-5806-9269-19449158751d'})  2025-09-04 00:43:32.354722 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:32.354733 | orchestrator | 2025-09-04 00:43:32.354743 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-04 00:43:32.354754 | orchestrator | Thursday 04 September 2025 00:43:32 +0000 (0:00:00.162) 0:01:11.684 **** 2025-09-04 00:43:32.354765 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4210a630-ee04-544c-a974-f1a5ed1af07b', 'data_vg': 'ceph-4210a630-ee04-544c-a974-f1a5ed1af07b'})  2025-09-04 00:43:32.354775 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f811dc1d-ef4c-5806-9269-19449158751d', 'data_vg': 'ceph-f811dc1d-ef4c-5806-9269-19449158751d'})  2025-09-04 00:43:32.354786 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:32.354797 | orchestrator | 2025-09-04 00:43:32.354807 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-04 00:43:32.354818 | orchestrator | Thursday 04 September 2025 00:43:32 +0000 (0:00:00.161) 0:01:11.846 **** 2025-09-04 00:43:32.354855 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4210a630-ee04-544c-a974-f1a5ed1af07b', 'data_vg': 'ceph-4210a630-ee04-544c-a974-f1a5ed1af07b'})  2025-09-04 00:43:35.503942 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f811dc1d-ef4c-5806-9269-19449158751d', 'data_vg': 'ceph-f811dc1d-ef4c-5806-9269-19449158751d'})  2025-09-04 00:43:35.504024 | orchestrator | skipping: [testbed-node-5] 2025-09-04 
00:43:35.504039 | orchestrator | 2025-09-04 00:43:35.504052 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-04 00:43:35.504063 | orchestrator | Thursday 04 September 2025 00:43:32 +0000 (0:00:00.159) 0:01:12.006 **** 2025-09-04 00:43:35.504074 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4210a630-ee04-544c-a974-f1a5ed1af07b', 'data_vg': 'ceph-4210a630-ee04-544c-a974-f1a5ed1af07b'})  2025-09-04 00:43:35.504085 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f811dc1d-ef4c-5806-9269-19449158751d', 'data_vg': 'ceph-f811dc1d-ef4c-5806-9269-19449158751d'})  2025-09-04 00:43:35.504096 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:35.504107 | orchestrator | 2025-09-04 00:43:35.504118 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-04 00:43:35.504129 | orchestrator | Thursday 04 September 2025 00:43:32 +0000 (0:00:00.218) 0:01:12.224 **** 2025-09-04 00:43:35.504139 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4210a630-ee04-544c-a974-f1a5ed1af07b', 'data_vg': 'ceph-4210a630-ee04-544c-a974-f1a5ed1af07b'})  2025-09-04 00:43:35.504170 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f811dc1d-ef4c-5806-9269-19449158751d', 'data_vg': 'ceph-f811dc1d-ef4c-5806-9269-19449158751d'})  2025-09-04 00:43:35.504182 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:35.504192 | orchestrator | 2025-09-04 00:43:35.504203 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-09-04 00:43:35.504214 | orchestrator | Thursday 04 September 2025 00:43:32 +0000 (0:00:00.175) 0:01:12.400 **** 2025-09-04 00:43:35.504224 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4210a630-ee04-544c-a974-f1a5ed1af07b', 'data_vg': 'ceph-4210a630-ee04-544c-a974-f1a5ed1af07b'})  2025-09-04 00:43:35.504235 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f811dc1d-ef4c-5806-9269-19449158751d', 'data_vg': 'ceph-f811dc1d-ef4c-5806-9269-19449158751d'})  2025-09-04 00:43:35.504246 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:35.504257 | orchestrator | 2025-09-04 00:43:35.504278 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-04 00:43:35.504289 | orchestrator | Thursday 04 September 2025 00:43:32 +0000 (0:00:00.160) 0:01:12.561 **** 2025-09-04 00:43:35.504300 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4210a630-ee04-544c-a974-f1a5ed1af07b', 'data_vg': 'ceph-4210a630-ee04-544c-a974-f1a5ed1af07b'})  2025-09-04 00:43:35.504311 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f811dc1d-ef4c-5806-9269-19449158751d', 'data_vg': 'ceph-f811dc1d-ef4c-5806-9269-19449158751d'})  2025-09-04 00:43:35.504322 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:35.504333 | orchestrator | 2025-09-04 00:43:35.504343 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-04 00:43:35.504355 | orchestrator | Thursday 04 September 2025 00:43:33 +0000 (0:00:00.365) 0:01:12.926 **** 2025-09-04 00:43:35.504366 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4210a630-ee04-544c-a974-f1a5ed1af07b', 'data_vg': 'ceph-4210a630-ee04-544c-a974-f1a5ed1af07b'})  2025-09-04 00:43:35.504376 | orchestrator | skipping: [testbed-node-5] => 
(item={'data': 'osd-block-f811dc1d-ef4c-5806-9269-19449158751d', 'data_vg': 'ceph-f811dc1d-ef4c-5806-9269-19449158751d'})  2025-09-04 00:43:35.504387 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:35.504398 | orchestrator | 2025-09-04 00:43:35.504408 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-04 00:43:35.504419 | orchestrator | Thursday 04 September 2025 00:43:33 +0000 (0:00:00.160) 0:01:13.087 **** 2025-09-04 00:43:35.504430 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:43:35.504441 | orchestrator | 2025-09-04 00:43:35.504452 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-04 00:43:35.504462 | orchestrator | Thursday 04 September 2025 00:43:33 +0000 (0:00:00.540) 0:01:13.628 **** 2025-09-04 00:43:35.504474 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:43:35.504486 | orchestrator | 2025-09-04 00:43:35.504500 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-04 00:43:35.504512 | orchestrator | Thursday 04 September 2025 00:43:34 +0000 (0:00:00.547) 0:01:14.175 **** 2025-09-04 00:43:35.504524 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:43:35.504537 | orchestrator | 2025-09-04 00:43:35.504550 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-04 00:43:35.504562 | orchestrator | Thursday 04 September 2025 00:43:34 +0000 (0:00:00.147) 0:01:14.323 **** 2025-09-04 00:43:35.504575 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-4210a630-ee04-544c-a974-f1a5ed1af07b', 'vg_name': 'ceph-4210a630-ee04-544c-a974-f1a5ed1af07b'}) 2025-09-04 00:43:35.504588 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-f811dc1d-ef4c-5806-9269-19449158751d', 'vg_name': 'ceph-f811dc1d-ef4c-5806-9269-19449158751d'}) 2025-09-04 00:43:35.504601 | orchestrator | 2025-09-04 00:43:35.504613 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-04 00:43:35.504631 | orchestrator | Thursday 04 September 2025 00:43:34 +0000 (0:00:00.175) 0:01:14.498 **** 2025-09-04 00:43:35.504661 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4210a630-ee04-544c-a974-f1a5ed1af07b', 'data_vg': 'ceph-4210a630-ee04-544c-a974-f1a5ed1af07b'})  2025-09-04 00:43:35.504675 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f811dc1d-ef4c-5806-9269-19449158751d', 'data_vg': 'ceph-f811dc1d-ef4c-5806-9269-19449158751d'})  2025-09-04 00:43:35.504687 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:35.504700 | orchestrator | 2025-09-04 00:43:35.504713 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-04 00:43:35.504725 | orchestrator | Thursday 04 September 2025 00:43:34 +0000 (0:00:00.156) 0:01:14.655 **** 2025-09-04 00:43:35.504737 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4210a630-ee04-544c-a974-f1a5ed1af07b', 'data_vg': 'ceph-4210a630-ee04-544c-a974-f1a5ed1af07b'})  2025-09-04 00:43:35.504751 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f811dc1d-ef4c-5806-9269-19449158751d', 'data_vg': 'ceph-f811dc1d-ef4c-5806-9269-19449158751d'})  2025-09-04 00:43:35.504764 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:35.504777 | orchestrator | 2025-09-04 00:43:35.504790 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes 
is missing] ************************ 2025-09-04 00:43:35.504804 | orchestrator | Thursday 04 September 2025 00:43:35 +0000 (0:00:00.160) 0:01:14.816 **** 2025-09-04 00:43:35.504816 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4210a630-ee04-544c-a974-f1a5ed1af07b', 'data_vg': 'ceph-4210a630-ee04-544c-a974-f1a5ed1af07b'})  2025-09-04 00:43:35.504849 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f811dc1d-ef4c-5806-9269-19449158751d', 'data_vg': 'ceph-f811dc1d-ef4c-5806-9269-19449158751d'})  2025-09-04 00:43:35.504860 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:43:35.504871 | orchestrator | 2025-09-04 00:43:35.504882 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-04 00:43:35.504892 | orchestrator | Thursday 04 September 2025 00:43:35 +0000 (0:00:00.155) 0:01:14.971 **** 2025-09-04 00:43:35.504903 | orchestrator | ok: [testbed-node-5] => { 2025-09-04 00:43:35.504914 | orchestrator |  "lvm_report": { 2025-09-04 00:43:35.504925 | orchestrator |  "lv": [ 2025-09-04 00:43:35.504936 | orchestrator |  { 2025-09-04 00:43:35.504946 | orchestrator |  "lv_name": "osd-block-4210a630-ee04-544c-a974-f1a5ed1af07b", 2025-09-04 00:43:35.504962 | orchestrator |  "vg_name": "ceph-4210a630-ee04-544c-a974-f1a5ed1af07b" 2025-09-04 00:43:35.504973 | orchestrator |  }, 2025-09-04 00:43:35.504983 | orchestrator |  { 2025-09-04 00:43:35.504994 | orchestrator |  "lv_name": "osd-block-f811dc1d-ef4c-5806-9269-19449158751d", 2025-09-04 00:43:35.505005 | orchestrator |  "vg_name": "ceph-f811dc1d-ef4c-5806-9269-19449158751d" 2025-09-04 00:43:35.505016 | orchestrator |  } 2025-09-04 00:43:35.505026 | orchestrator |  ], 2025-09-04 00:43:35.505037 | orchestrator |  "pv": [ 2025-09-04 00:43:35.505047 | orchestrator |  { 2025-09-04 00:43:35.505058 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-04 00:43:35.505069 | orchestrator |  "vg_name": "ceph-4210a630-ee04-544c-a974-f1a5ed1af07b" 2025-09-04 00:43:35.505080 | orchestrator |  }, 2025-09-04 00:43:35.505090 | orchestrator |  { 2025-09-04 00:43:35.505101 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-04 00:43:35.505112 | orchestrator |  "vg_name": "ceph-f811dc1d-ef4c-5806-9269-19449158751d" 2025-09-04 00:43:35.505123 | orchestrator |  } 2025-09-04 00:43:35.505133 | orchestrator |  ] 2025-09-04 00:43:35.505144 | orchestrator |  } 2025-09-04 00:43:35.505155 | orchestrator | } 2025-09-04 00:43:35.505166 | orchestrator | 2025-09-04 00:43:35.505177 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:43:35.505194 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-04 00:43:35.505205 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-04 00:43:35.505216 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-04 00:43:35.505227 | orchestrator | 2025-09-04 00:43:35.505238 | orchestrator | 2025-09-04 00:43:35.505248 | orchestrator | 2025-09-04 00:43:35.505259 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 00:43:35.505270 | orchestrator | Thursday 04 September 2025 00:43:35 +0000 (0:00:00.161) 0:01:15.133 **** 2025-09-04 00:43:35.505281 | orchestrator | =============================================================================== 2025-09-04 00:43:35.505292 | 
orchestrator | Create block VGs -------------------------------------------------------- 5.90s 2025-09-04 00:43:35.505302 | orchestrator | Create block LVs -------------------------------------------------------- 4.34s 2025-09-04 00:43:35.505313 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 2.03s 2025-09-04 00:43:35.505324 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.85s 2025-09-04 00:43:35.505334 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.69s 2025-09-04 00:43:35.505345 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.66s 2025-09-04 00:43:35.505356 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.59s 2025-09-04 00:43:35.505366 | orchestrator | Add known partitions to the list of available block devices ------------- 1.50s 2025-09-04 00:43:35.505383 | orchestrator | Add known links to the list of available block devices ------------------ 1.27s 2025-09-04 00:43:35.890303 | orchestrator | Add known partitions to the list of available block devices ------------- 1.08s 2025-09-04 00:43:35.890375 | orchestrator | Print LVM report data --------------------------------------------------- 0.97s 2025-09-04 00:43:35.890387 | orchestrator | Add known links to the list of available block devices ------------------ 0.87s 2025-09-04 00:43:35.890397 | orchestrator | Add known partitions to the list of available block devices ------------- 0.84s 2025-09-04 00:43:35.890407 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.76s 2025-09-04 00:43:35.890416 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.74s 2025-09-04 00:43:35.890426 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.74s 2025-09-04 00:43:35.890435 | orchestrator | Get initial list of available block devices ----------------------------- 0.74s 2025-09-04 00:43:35.890444 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.71s 2025-09-04 00:43:35.890454 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.71s 2025-09-04 00:43:35.890463 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.71s 2025-09-04 00:43:48.262105 | orchestrator | 2025-09-04 00:43:48 | INFO  | Task eb50fec7-8901-4929-ab6b-4c1cf1d8e57b (facts) was prepared for execution. 2025-09-04 00:43:48.262205 | orchestrator | 2025-09-04 00:43:48 | INFO  | It takes a moment until task eb50fec7-8901-4929-ab6b-4c1cf1d8e57b (facts) has been started and output is visible here. 
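Context for the Ceph LVM play that finished above: every DB/WAL-related task reports "skipping" because the lvm_volumes entries for these nodes declare only a block data LV on its pre-created ceph-<uuid> VG, with no separate DB or WAL device. A minimal sketch of such an entry, built from the item values visible in this run; the commented db/db_vg and wal/wal_vg keys are only an assumption about how a separate DB/WAL device would be declared and are not part of this deployment:

lvm_volumes:
  - data: osd-block-4210a630-ee04-544c-a974-f1a5ed1af07b
    data_vg: ceph-4210a630-ee04-544c-a974-f1a5ed1af07b
  - data: osd-block-f811dc1d-ef4c-5806-9269-19449158751d
    data_vg: ceph-f811dc1d-ef4c-5806-9269-19449158751d
    # assumed optional keys for an external DB/WAL device, unused in this run:
    # db: osd-db-f811dc1d-ef4c-5806-9269-19449158751d
    # db_vg: ceph-db-0
    # wal: osd-wal-f811dc1d-ef4c-5806-9269-19449158751d
    # wal_vg: ceph-wal-0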
2025-09-04 00:44:00.647185 | orchestrator | 2025-09-04 00:44:00.647276 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-09-04 00:44:00.647291 | orchestrator | 2025-09-04 00:44:00.647303 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-04 00:44:00.647314 | orchestrator | Thursday 04 September 2025 00:43:52 +0000 (0:00:00.267) 0:00:00.267 **** 2025-09-04 00:44:00.647325 | orchestrator | ok: [testbed-manager] 2025-09-04 00:44:00.647336 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:44:00.647372 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:44:00.647383 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:44:00.647394 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:44:00.647404 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:44:00.647415 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:44:00.647425 | orchestrator | 2025-09-04 00:44:00.647436 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-04 00:44:00.647447 | orchestrator | Thursday 04 September 2025 00:43:53 +0000 (0:00:01.118) 0:00:01.386 **** 2025-09-04 00:44:00.647458 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:44:00.647469 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:44:00.647481 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:44:00.647491 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:44:00.647502 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:44:00.647512 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:44:00.647523 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:44:00.647534 | orchestrator | 2025-09-04 00:44:00.647544 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-04 00:44:00.647555 | orchestrator | 2025-09-04 00:44:00.647566 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-04 00:44:00.647577 | orchestrator | Thursday 04 September 2025 00:43:54 +0000 (0:00:01.240) 0:00:02.627 **** 2025-09-04 00:44:00.647588 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:44:00.647598 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:44:00.647609 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:44:00.647619 | orchestrator | ok: [testbed-manager] 2025-09-04 00:44:00.647630 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:44:00.647645 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:44:00.647664 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:44:00.647677 | orchestrator | 2025-09-04 00:44:00.647688 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-04 00:44:00.647699 | orchestrator | 2025-09-04 00:44:00.647709 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-04 00:44:00.647720 | orchestrator | Thursday 04 September 2025 00:43:59 +0000 (0:00:04.988) 0:00:07.615 **** 2025-09-04 00:44:00.647731 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:44:00.647742 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:44:00.647752 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:44:00.647763 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:44:00.647774 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:44:00.647784 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:44:00.647794 | orchestrator | skipping: 
[testbed-node-5] 2025-09-04 00:44:00.647805 | orchestrator | 2025-09-04 00:44:00.647849 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:44:00.647861 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-04 00:44:00.647873 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-04 00:44:00.647884 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-04 00:44:00.647894 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-04 00:44:00.647905 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-04 00:44:00.647916 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-04 00:44:00.647927 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-04 00:44:00.647945 | orchestrator | 2025-09-04 00:44:00.647956 | orchestrator | 2025-09-04 00:44:00.647967 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 00:44:00.647978 | orchestrator | Thursday 04 September 2025 00:44:00 +0000 (0:00:00.507) 0:00:08.123 **** 2025-09-04 00:44:00.647988 | orchestrator | =============================================================================== 2025-09-04 00:44:00.647999 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.99s 2025-09-04 00:44:00.648010 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.24s 2025-09-04 00:44:00.648020 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.12s 2025-09-04 00:44:00.648031 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s 2025-09-04 00:44:13.114740 | orchestrator | 2025-09-04 00:44:13 | INFO  | Task b68390fb-befa-42f2-9061-e98e2cca1e4d (frr) was prepared for execution. 2025-09-04 00:44:13.114883 | orchestrator | 2025-09-04 00:44:13 | INFO  | It takes a moment until task b68390fb-befa-42f2-9061-e98e2cca1e4d (frr) has been started and output is visible here. 
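The facts play above follows a simple pattern: ensure a local custom-facts directory exists on every host, optionally copy fact files into it, then run a full fact-gathering pass. A minimal sketch of that pattern, assuming the conventional /etc/ansible/facts.d location for local facts; the actual variables and fact files used by osism.commons.facts are not visible in this log:

- name: Gather facts for all hosts
  hosts: all
  gather_facts: false
  tasks:
    - name: Create custom facts directory
      ansible.builtin.file:
        path: /etc/ansible/facts.d   # assumed conventional local-facts path
        state: directory
        mode: "0755"

    - name: Gathers facts about hosts
      ansible.builtin.setup: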
2025-09-04 00:44:38.880250 | orchestrator | 2025-09-04 00:44:38.880340 | orchestrator | PLAY [Apply role frr] ********************************************************** 2025-09-04 00:44:38.880351 | orchestrator | 2025-09-04 00:44:38.880361 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2025-09-04 00:44:38.880369 | orchestrator | Thursday 04 September 2025 00:44:17 +0000 (0:00:00.235) 0:00:00.235 **** 2025-09-04 00:44:38.880391 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2025-09-04 00:44:38.880400 | orchestrator | 2025-09-04 00:44:38.880408 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2025-09-04 00:44:38.880416 | orchestrator | Thursday 04 September 2025 00:44:17 +0000 (0:00:00.238) 0:00:00.474 **** 2025-09-04 00:44:38.880424 | orchestrator | changed: [testbed-manager] 2025-09-04 00:44:38.880432 | orchestrator | 2025-09-04 00:44:38.880440 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2025-09-04 00:44:38.880448 | orchestrator | Thursday 04 September 2025 00:44:18 +0000 (0:00:01.182) 0:00:01.657 **** 2025-09-04 00:44:38.880455 | orchestrator | changed: [testbed-manager] 2025-09-04 00:44:38.880463 | orchestrator | 2025-09-04 00:44:38.880476 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2025-09-04 00:44:38.880483 | orchestrator | Thursday 04 September 2025 00:44:28 +0000 (0:00:09.651) 0:00:11.308 **** 2025-09-04 00:44:38.880491 | orchestrator | ok: [testbed-manager] 2025-09-04 00:44:38.880499 | orchestrator | 2025-09-04 00:44:38.880506 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2025-09-04 00:44:38.880514 | orchestrator | Thursday 04 September 2025 00:44:29 +0000 (0:00:01.282) 0:00:12.591 **** 2025-09-04 00:44:38.880521 | orchestrator | changed: [testbed-manager] 2025-09-04 00:44:38.880529 | orchestrator | 2025-09-04 00:44:38.880536 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2025-09-04 00:44:38.880543 | orchestrator | Thursday 04 September 2025 00:44:30 +0000 (0:00:00.991) 0:00:13.583 **** 2025-09-04 00:44:38.880551 | orchestrator | ok: [testbed-manager] 2025-09-04 00:44:38.880558 | orchestrator | 2025-09-04 00:44:38.880566 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2025-09-04 00:44:38.880574 | orchestrator | Thursday 04 September 2025 00:44:31 +0000 (0:00:01.162) 0:00:14.746 **** 2025-09-04 00:44:38.880582 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-04 00:44:38.880589 | orchestrator | 2025-09-04 00:44:38.880596 | orchestrator | TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] *** 2025-09-04 00:44:38.880604 | orchestrator | Thursday 04 September 2025 00:44:32 +0000 (0:00:00.796) 0:00:15.542 **** 2025-09-04 00:44:38.880611 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:44:38.880619 | orchestrator | 2025-09-04 00:44:38.880627 | orchestrator | TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] ********* 2025-09-04 00:44:38.880652 | orchestrator | Thursday 04 September 2025 00:44:32 +0000 (0:00:00.182) 0:00:15.724 **** 2025-09-04 00:44:38.880660 | orchestrator | changed: [testbed-manager] 2025-09-04 00:44:38.880668 | orchestrator 
| 2025-09-04 00:44:38.880675 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2025-09-04 00:44:38.880682 | orchestrator | Thursday 04 September 2025 00:44:33 +0000 (0:00:00.936) 0:00:16.661 **** 2025-09-04 00:44:38.880690 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2025-09-04 00:44:38.880697 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2025-09-04 00:44:38.880705 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2025-09-04 00:44:38.880713 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2025-09-04 00:44:38.880720 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2025-09-04 00:44:38.880727 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2025-09-04 00:44:38.880735 | orchestrator | 2025-09-04 00:44:38.880742 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2025-09-04 00:44:38.880749 | orchestrator | Thursday 04 September 2025 00:44:35 +0000 (0:00:02.163) 0:00:18.824 **** 2025-09-04 00:44:38.880757 | orchestrator | ok: [testbed-manager] 2025-09-04 00:44:38.880764 | orchestrator | 2025-09-04 00:44:38.880771 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2025-09-04 00:44:38.880779 | orchestrator | Thursday 04 September 2025 00:44:37 +0000 (0:00:01.383) 0:00:20.207 **** 2025-09-04 00:44:38.880825 | orchestrator | changed: [testbed-manager] 2025-09-04 00:44:38.880836 | orchestrator | 2025-09-04 00:44:38.880845 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:44:38.880854 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-04 00:44:38.880863 | orchestrator | 2025-09-04 00:44:38.880871 | orchestrator | 2025-09-04 00:44:38.880881 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 00:44:38.880890 | orchestrator | Thursday 04 September 2025 00:44:38 +0000 (0:00:01.460) 0:00:21.668 **** 2025-09-04 00:44:38.880899 | orchestrator | =============================================================================== 2025-09-04 00:44:38.880908 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.65s 2025-09-04 00:44:38.880916 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.16s 2025-09-04 00:44:38.880925 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.46s 2025-09-04 00:44:38.880935 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.38s 2025-09-04 00:44:38.880958 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.28s 2025-09-04 00:44:38.880967 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.18s 2025-09-04 00:44:38.880976 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.16s 2025-09-04 00:44:38.880985 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.99s 2025-09-04 
00:44:38.880994 | orchestrator | osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 0.94s 2025-09-04 00:44:38.881003 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.80s 2025-09-04 00:44:38.881012 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.24s 2025-09-04 00:44:38.881021 | orchestrator | osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.18s 2025-09-04 00:44:39.182727 | orchestrator | 2025-09-04 00:44:39.186827 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Thu Sep 4 00:44:39 UTC 2025 2025-09-04 00:44:39.186890 | orchestrator | 2025-09-04 00:44:41.080380 | orchestrator | 2025-09-04 00:44:41 | INFO  | Collection nutshell is prepared for execution 2025-09-04 00:44:41.080480 | orchestrator | 2025-09-04 00:44:41 | INFO  | D [0] - dotfiles 2025-09-04 00:44:51.158442 | orchestrator | 2025-09-04 00:44:51 | INFO  | D [0] - homer 2025-09-04 00:44:51.158552 | orchestrator | 2025-09-04 00:44:51 | INFO  | D [0] - netdata 2025-09-04 00:44:51.158567 | orchestrator | 2025-09-04 00:44:51 | INFO  | D [0] - openstackclient 2025-09-04 00:44:51.158579 | orchestrator | 2025-09-04 00:44:51 | INFO  | D [0] - phpmyadmin 2025-09-04 00:44:51.158590 | orchestrator | 2025-09-04 00:44:51 | INFO  | A [0] - common 2025-09-04 00:44:51.162599 | orchestrator | 2025-09-04 00:44:51 | INFO  | A [1] -- loadbalancer 2025-09-04 00:44:51.162625 | orchestrator | 2025-09-04 00:44:51 | INFO  | D [2] --- opensearch 2025-09-04 00:44:51.162637 | orchestrator | 2025-09-04 00:44:51 | INFO  | A [2] --- mariadb-ng 2025-09-04 00:44:51.162936 | orchestrator | 2025-09-04 00:44:51 | INFO  | D [3] ---- horizon 2025-09-04 00:44:51.163288 | orchestrator | 2025-09-04 00:44:51 | INFO  | A [3] ---- keystone 2025-09-04 00:44:51.163322 | orchestrator | 2025-09-04 00:44:51 | INFO  | A [4] ----- neutron 2025-09-04 00:44:51.163871 | orchestrator | 2025-09-04 00:44:51 | INFO  | D [5] ------ wait-for-nova 2025-09-04 00:44:51.164160 | orchestrator | 2025-09-04 00:44:51 | INFO  | A [5] ------ octavia 2025-09-04 00:44:51.165826 | orchestrator | 2025-09-04 00:44:51 | INFO  | D [4] ----- barbican 2025-09-04 00:44:51.166336 | orchestrator | 2025-09-04 00:44:51 | INFO  | D [4] ----- designate 2025-09-04 00:44:51.166436 | orchestrator | 2025-09-04 00:44:51 | INFO  | D [4] ----- ironic 2025-09-04 00:44:51.166452 | orchestrator | 2025-09-04 00:44:51 | INFO  | D [4] ----- placement 2025-09-04 00:44:51.166464 | orchestrator | 2025-09-04 00:44:51 | INFO  | D [4] ----- magnum 2025-09-04 00:44:51.167113 | orchestrator | 2025-09-04 00:44:51 | INFO  | A [1] -- openvswitch 2025-09-04 00:44:51.167141 | orchestrator | 2025-09-04 00:44:51 | INFO  | D [2] --- ovn 2025-09-04 00:44:51.167748 | orchestrator | 2025-09-04 00:44:51 | INFO  | D [1] -- memcached 2025-09-04 00:44:51.167771 | orchestrator | 2025-09-04 00:44:51 | INFO  | D [1] -- redis 2025-09-04 00:44:51.167809 | orchestrator | 2025-09-04 00:44:51 | INFO  | D [1] -- rabbitmq-ng 2025-09-04 00:44:51.168631 | orchestrator | 2025-09-04 00:44:51 | INFO  | A [0] - kubernetes 2025-09-04 00:44:51.171155 | orchestrator | 2025-09-04 00:44:51 | INFO  | D [1] -- kubeconfig 2025-09-04 00:44:51.171455 | orchestrator | 2025-09-04 00:44:51 | INFO  | A [1] -- copy-kubeconfig 2025-09-04 00:44:51.171529 | orchestrator | 2025-09-04 00:44:51 | INFO  | A [0] - ceph 2025-09-04 00:44:51.174412 | orchestrator | 2025-09-04 00:44:51 | INFO  | A [1] -- ceph-pools 2025-09-04 
00:44:51.174439 | orchestrator | 2025-09-04 00:44:51 | INFO  | A [2] --- copy-ceph-keys 2025-09-04 00:44:51.174456 | orchestrator | 2025-09-04 00:44:51 | INFO  | A [3] ---- cephclient 2025-09-04 00:44:51.174698 | orchestrator | 2025-09-04 00:44:51 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-09-04 00:44:51.174941 | orchestrator | 2025-09-04 00:44:51 | INFO  | A [4] ----- wait-for-keystone 2025-09-04 00:44:51.174962 | orchestrator | 2025-09-04 00:44:51 | INFO  | D [5] ------ kolla-ceph-rgw 2025-09-04 00:44:51.175191 | orchestrator | 2025-09-04 00:44:51 | INFO  | D [5] ------ glance 2025-09-04 00:44:51.175482 | orchestrator | 2025-09-04 00:44:51 | INFO  | D [5] ------ cinder 2025-09-04 00:44:51.175503 | orchestrator | 2025-09-04 00:44:51 | INFO  | D [5] ------ nova 2025-09-04 00:44:51.176300 | orchestrator | 2025-09-04 00:44:51 | INFO  | A [4] ----- prometheus 2025-09-04 00:44:51.176322 | orchestrator | 2025-09-04 00:44:51 | INFO  | D [5] ------ grafana 2025-09-04 00:44:51.390318 | orchestrator | 2025-09-04 00:44:51 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-09-04 00:44:51.390414 | orchestrator | 2025-09-04 00:44:51 | INFO  | Tasks are running in the background 2025-09-04 00:44:54.276726 | orchestrator | 2025-09-04 00:44:54 | INFO  | No task IDs specified, wait for all currently running tasks 2025-09-04 00:44:56.421715 | orchestrator | 2025-09-04 00:44:56 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:44:56.422115 | orchestrator | 2025-09-04 00:44:56 | INFO  | Task a22f30d2-6dd4-4d78-a14b-af52e1f562ca is in state STARTED 2025-09-04 00:44:56.422924 | orchestrator | 2025-09-04 00:44:56 | INFO  | Task 6524e23f-fda9-4137-8292-f56015a88cc0 is in state STARTED 2025-09-04 00:44:56.425281 | orchestrator | 2025-09-04 00:44:56 | INFO  | Task 61c49f6c-1655-437c-b0dc-6debe6198808 is in state STARTED 2025-09-04 00:44:56.425985 | orchestrator | 2025-09-04 00:44:56 | INFO  | Task 53de278f-d1a9-4c7d-9d95-ed9e95eaeb8f is in state STARTED 2025-09-04 00:44:56.426765 | orchestrator | 2025-09-04 00:44:56 | INFO  | Task 2540858e-ebc2-403c-ab6e-43ea110be254 is in state STARTED 2025-09-04 00:44:56.428364 | orchestrator | 2025-09-04 00:44:56 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:44:56.428386 | orchestrator | 2025-09-04 00:44:56 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:44:59.475866 | orchestrator | 2025-09-04 00:44:59 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:44:59.475995 | orchestrator | 2025-09-04 00:44:59 | INFO  | Task a22f30d2-6dd4-4d78-a14b-af52e1f562ca is in state STARTED 2025-09-04 00:44:59.476380 | orchestrator | 2025-09-04 00:44:59 | INFO  | Task 6524e23f-fda9-4137-8292-f56015a88cc0 is in state STARTED 2025-09-04 00:44:59.476846 | orchestrator | 2025-09-04 00:44:59 | INFO  | Task 61c49f6c-1655-437c-b0dc-6debe6198808 is in state STARTED 2025-09-04 00:44:59.477405 | orchestrator | 2025-09-04 00:44:59 | INFO  | Task 53de278f-d1a9-4c7d-9d95-ed9e95eaeb8f is in state STARTED 2025-09-04 00:44:59.477862 | orchestrator | 2025-09-04 00:44:59 | INFO  | Task 2540858e-ebc2-403c-ab6e-43ea110be254 is in state STARTED 2025-09-04 00:44:59.478436 | orchestrator | 2025-09-04 00:44:59 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:44:59.478462 | orchestrator | 2025-09-04 00:44:59 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:45:02.517646 | orchestrator | 2025-09-04 00:45:02 | INFO  
| Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:45:02.517886 | orchestrator | 2025-09-04 00:45:02 | INFO  | Task a22f30d2-6dd4-4d78-a14b-af52e1f562ca is in state STARTED 2025-09-04 00:45:02.518546 | orchestrator | 2025-09-04 00:45:02 | INFO  | Task 6524e23f-fda9-4137-8292-f56015a88cc0 is in state STARTED 2025-09-04 00:45:02.519238 | orchestrator | 2025-09-04 00:45:02 | INFO  | Task 61c49f6c-1655-437c-b0dc-6debe6198808 is in state STARTED 2025-09-04 00:45:02.519919 | orchestrator | 2025-09-04 00:45:02 | INFO  | Task 53de278f-d1a9-4c7d-9d95-ed9e95eaeb8f is in state STARTED 2025-09-04 00:45:02.520563 | orchestrator | 2025-09-04 00:45:02 | INFO  | Task 2540858e-ebc2-403c-ab6e-43ea110be254 is in state STARTED 2025-09-04 00:45:02.521398 | orchestrator | 2025-09-04 00:45:02 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:45:02.521428 | orchestrator | 2025-09-04 00:45:02 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:45:05.597823 | orchestrator | 2025-09-04 00:45:05 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:45:05.597913 | orchestrator | 2025-09-04 00:45:05 | INFO  | Task a22f30d2-6dd4-4d78-a14b-af52e1f562ca is in state STARTED 2025-09-04 00:45:05.597927 | orchestrator | 2025-09-04 00:45:05 | INFO  | Task 6524e23f-fda9-4137-8292-f56015a88cc0 is in state STARTED 2025-09-04 00:45:05.597939 | orchestrator | 2025-09-04 00:45:05 | INFO  | Task 61c49f6c-1655-437c-b0dc-6debe6198808 is in state STARTED 2025-09-04 00:45:05.597950 | orchestrator | 2025-09-04 00:45:05 | INFO  | Task 53de278f-d1a9-4c7d-9d95-ed9e95eaeb8f is in state STARTED 2025-09-04 00:45:05.597961 | orchestrator | 2025-09-04 00:45:05 | INFO  | Task 2540858e-ebc2-403c-ab6e-43ea110be254 is in state STARTED 2025-09-04 00:45:05.597971 | orchestrator | 2025-09-04 00:45:05 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:45:05.597982 | orchestrator | 2025-09-04 00:45:05 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:45:08.702748 | orchestrator | 2025-09-04 00:45:08 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:45:08.703000 | orchestrator | 2025-09-04 00:45:08 | INFO  | Task a22f30d2-6dd4-4d78-a14b-af52e1f562ca is in state STARTED 2025-09-04 00:45:08.703750 | orchestrator | 2025-09-04 00:45:08 | INFO  | Task 6524e23f-fda9-4137-8292-f56015a88cc0 is in state STARTED 2025-09-04 00:45:08.704442 | orchestrator | 2025-09-04 00:45:08 | INFO  | Task 61c49f6c-1655-437c-b0dc-6debe6198808 is in state STARTED 2025-09-04 00:45:08.705261 | orchestrator | 2025-09-04 00:45:08 | INFO  | Task 53de278f-d1a9-4c7d-9d95-ed9e95eaeb8f is in state STARTED 2025-09-04 00:45:08.709001 | orchestrator | 2025-09-04 00:45:08 | INFO  | Task 2540858e-ebc2-403c-ab6e-43ea110be254 is in state STARTED 2025-09-04 00:45:08.709638 | orchestrator | 2025-09-04 00:45:08 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:45:08.709663 | orchestrator | 2025-09-04 00:45:08 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:45:11.883218 | orchestrator | 2025-09-04 00:45:11 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:45:11.883337 | orchestrator | 2025-09-04 00:45:11 | INFO  | Task a22f30d2-6dd4-4d78-a14b-af52e1f562ca is in state STARTED 2025-09-04 00:45:11.883353 | orchestrator | 2025-09-04 00:45:11 | INFO  | Task 6524e23f-fda9-4137-8292-f56015a88cc0 is in state STARTED 2025-09-04 00:45:11.883366 
| orchestrator | 2025-09-04 00:45:11 | INFO  | Task 61c49f6c-1655-437c-b0dc-6debe6198808 is in state STARTED 2025-09-04 00:45:11.883378 | orchestrator | 2025-09-04 00:45:11 | INFO  | Task 53de278f-d1a9-4c7d-9d95-ed9e95eaeb8f is in state STARTED 2025-09-04 00:45:11.883389 | orchestrator | 2025-09-04 00:45:11 | INFO  | Task 2540858e-ebc2-403c-ab6e-43ea110be254 is in state STARTED 2025-09-04 00:45:11.883401 | orchestrator | 2025-09-04 00:45:11 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:45:11.883413 | orchestrator | 2025-09-04 00:45:11 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:45:14.953970 | orchestrator | 2025-09-04 00:45:14 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:45:14.974244 | orchestrator | 2025-09-04 00:45:14 | INFO  | Task a22f30d2-6dd4-4d78-a14b-af52e1f562ca is in state STARTED 2025-09-04 00:45:14.984306 | orchestrator | 2025-09-04 00:45:14 | INFO  | Task 6524e23f-fda9-4137-8292-f56015a88cc0 is in state STARTED 2025-09-04 00:45:15.006255 | orchestrator | 2025-09-04 00:45:15 | INFO  | Task 61c49f6c-1655-437c-b0dc-6debe6198808 is in state STARTED 2025-09-04 00:45:15.027344 | orchestrator | 2025-09-04 00:45:15 | INFO  | Task 53de278f-d1a9-4c7d-9d95-ed9e95eaeb8f is in state STARTED 2025-09-04 00:45:15.054080 | orchestrator | 2025-09-04 00:45:15 | INFO  | Task 2540858e-ebc2-403c-ab6e-43ea110be254 is in state STARTED 2025-09-04 00:45:15.057661 | orchestrator | 2025-09-04 00:45:15 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:45:15.057690 | orchestrator | 2025-09-04 00:45:15 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:45:18.180704 | orchestrator | 2025-09-04 00:45:18 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:45:18.184590 | orchestrator | 2025-09-04 00:45:18 | INFO  | Task a22f30d2-6dd4-4d78-a14b-af52e1f562ca is in state STARTED 2025-09-04 00:45:18.190139 | orchestrator | 2025-09-04 00:45:18 | INFO  | Task 6524e23f-fda9-4137-8292-f56015a88cc0 is in state STARTED 2025-09-04 00:45:18.196488 | orchestrator | 2025-09-04 00:45:18 | INFO  | Task 61c49f6c-1655-437c-b0dc-6debe6198808 is in state STARTED 2025-09-04 00:45:18.206610 | orchestrator | 2025-09-04 00:45:18 | INFO  | Task 53de278f-d1a9-4c7d-9d95-ed9e95eaeb8f is in state STARTED 2025-09-04 00:45:18.207710 | orchestrator | 2025-09-04 00:45:18 | INFO  | Task 2540858e-ebc2-403c-ab6e-43ea110be254 is in state STARTED 2025-09-04 00:45:18.207750 | orchestrator | 2025-09-04 00:45:18 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:45:18.207797 | orchestrator | 2025-09-04 00:45:18 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:45:21.632560 | orchestrator | 2025-09-04 00:45:21.632660 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-09-04 00:45:21.632675 | orchestrator | 2025-09-04 00:45:21.632685 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
**** 2025-09-04 00:45:21.632695 | orchestrator | Thursday 04 September 2025 00:45:05 +0000 (0:00:00.708) 0:00:00.708 **** 2025-09-04 00:45:21.632705 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:45:21.632716 | orchestrator | changed: [testbed-manager] 2025-09-04 00:45:21.632726 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:45:21.632735 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:45:21.632745 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:45:21.632810 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:45:21.632821 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:45:21.632831 | orchestrator | 2025-09-04 00:45:21.632841 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-09-04 00:45:21.632851 | orchestrator | Thursday 04 September 2025 00:45:09 +0000 (0:00:04.400) 0:00:05.109 **** 2025-09-04 00:45:21.632861 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-09-04 00:45:21.632872 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-09-04 00:45:21.632881 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-09-04 00:45:21.632891 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-09-04 00:45:21.632901 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-09-04 00:45:21.632911 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-09-04 00:45:21.632921 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-09-04 00:45:21.632930 | orchestrator | 2025-09-04 00:45:21.632940 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2025-09-04 00:45:21.632950 | orchestrator | Thursday 04 September 2025 00:45:12 +0000 (0:00:02.440) 0:00:07.550 **** 2025-09-04 00:45:21.632972 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-04 00:45:10.742983', 'end': '2025-09-04 00:45:10.753260', 'delta': '0:00:00.010277', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-04 00:45:21.633004 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-04 00:45:10.720880', 'end': '2025-09-04 00:45:10.728436', 'delta': '0:00:00.007556', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-04 00:45:21.633015 | 
orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-04 00:45:10.788943', 'end': '2025-09-04 00:45:10.797717', 'delta': '0:00:00.008774', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-04 00:45:21.633052 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-04 00:45:11.108479', 'end': '2025-09-04 00:45:11.117338', 'delta': '0:00:00.008859', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-04 00:45:21.633327 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-04 00:45:11.534442', 'end': '2025-09-04 00:45:11.542150', 'delta': '0:00:00.007708', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-04 00:45:21.633342 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-04 00:45:12.008469', 'end': '2025-09-04 00:45:12.020113', 'delta': '0:00:00.011644', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-04 00:45:21.633371 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-04 00:45:11.952758', 'end': '2025-09-04 00:45:11.960430', 'delta': '0:00:00.007672', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-04 00:45:21.633383 | orchestrator | 2025-09-04 00:45:21.633395 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2025-09-04 00:45:21.633407 | orchestrator | Thursday 04 September 2025 00:45:14 +0000 (0:00:01.899) 0:00:09.449 **** 2025-09-04 00:45:21.633418 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-09-04 00:45:21.633430 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-09-04 00:45:21.633441 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-09-04 00:45:21.633450 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-09-04 00:45:21.633460 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-09-04 00:45:21.633469 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-09-04 00:45:21.633478 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-09-04 00:45:21.633488 | orchestrator | 2025-09-04 00:45:21.633497 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-09-04 00:45:21.633507 | orchestrator | Thursday 04 September 2025 00:45:16 +0000 (0:00:02.706) 0:00:12.156 **** 2025-09-04 00:45:21.633516 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-09-04 00:45:21.633526 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-09-04 00:45:21.633535 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-09-04 00:45:21.633545 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-09-04 00:45:21.633554 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-09-04 00:45:21.633563 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-09-04 00:45:21.633572 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-09-04 00:45:21.633582 | orchestrator | 2025-09-04 00:45:21.633592 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:45:21.633610 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:45:21.633621 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:45:21.633631 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:45:21.633641 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:45:21.633659 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:45:21.633669 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:45:21.633678 | orchestrator | testbed-node-5 : ok=5  
changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:45:21.633688 | orchestrator | 2025-09-04 00:45:21.633697 | orchestrator | 2025-09-04 00:45:21.633707 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 00:45:21.633716 | orchestrator | Thursday 04 September 2025 00:45:20 +0000 (0:00:03.739) 0:00:15.895 **** 2025-09-04 00:45:21.633726 | orchestrator | =============================================================================== 2025-09-04 00:45:21.633736 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.40s 2025-09-04 00:45:21.633745 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.74s 2025-09-04 00:45:21.633777 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.71s 2025-09-04 00:45:21.633787 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.44s 2025-09-04 00:45:21.633797 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.90s 2025-09-04 00:45:21.633807 | orchestrator | 2025-09-04 00:45:21 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:45:21.633817 | orchestrator | 2025-09-04 00:45:21 | INFO  | Task a22f30d2-6dd4-4d78-a14b-af52e1f562ca is in state STARTED 2025-09-04 00:45:21.633826 | orchestrator | 2025-09-04 00:45:21 | INFO  | Task 6524e23f-fda9-4137-8292-f56015a88cc0 is in state SUCCESS 2025-09-04 00:45:21.633836 | orchestrator | 2025-09-04 00:45:21 | INFO  | Task 61c49f6c-1655-437c-b0dc-6debe6198808 is in state STARTED 2025-09-04 00:45:21.633845 | orchestrator | 2025-09-04 00:45:21 | INFO  | Task 53de278f-d1a9-4c7d-9d95-ed9e95eaeb8f is in state STARTED 2025-09-04 00:45:21.633855 | orchestrator | 2025-09-04 00:45:21 | INFO  | Task 2540858e-ebc2-403c-ab6e-43ea110be254 is in state STARTED 2025-09-04 00:45:21.633864 | orchestrator | 2025-09-04 00:45:21 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:45:21.633874 | orchestrator | 2025-09-04 00:45:21 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:45:24.674312 | orchestrator | 2025-09-04 00:45:24 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:45:24.674429 | orchestrator | 2025-09-04 00:45:24 | INFO  | Task c9ef39a4-ab45-403c-ad9c-9e63e9c7ccd0 is in state STARTED 2025-09-04 00:45:24.674444 | orchestrator | 2025-09-04 00:45:24 | INFO  | Task a22f30d2-6dd4-4d78-a14b-af52e1f562ca is in state STARTED 2025-09-04 00:45:24.674457 | orchestrator | 2025-09-04 00:45:24 | INFO  | Task 61c49f6c-1655-437c-b0dc-6debe6198808 is in state STARTED 2025-09-04 00:45:24.674468 | orchestrator | 2025-09-04 00:45:24 | INFO  | Task 53de278f-d1a9-4c7d-9d95-ed9e95eaeb8f is in state STARTED 2025-09-04 00:45:24.674479 | orchestrator | 2025-09-04 00:45:24 | INFO  | Task 2540858e-ebc2-403c-ab6e-43ea110be254 is in state STARTED 2025-09-04 00:45:24.674501 | orchestrator | 2025-09-04 00:45:24 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:45:24.675278 | orchestrator | 2025-09-04 00:45:24 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:45:27.865934 | orchestrator | 2025-09-04 00:45:27 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:45:27.869927 | orchestrator | 2025-09-04 00:45:27 | INFO  | Task c9ef39a4-ab45-403c-ad9c-9e63e9c7ccd0 is in state STARTED 
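
The dotfiles play that finishes above follows a simple check-remove-link pattern: the role clones the dotfiles repository, checks whether each configured file (here only .tmux.conf) already exists as a link — the loop results above show its existence check shelling out to ls -F ~/.tmux.conf — removes a plain file that would block the link, and finally symlinks the file from the clone into the user's home directory. The following is a minimal, hypothetical Ansible sketch of that pattern, not the actual geerlingguy.dotfiles source; the clone path, the variable names and the use of ansible.builtin.stat instead of ls are assumptions made only for the sketch.

    - name: Link configured dotfiles into the home directory (illustrative sketch)
      hosts: all
      vars:
        dotfiles_clone_dir: ~/dotfiles          # assumed clone location
        dotfiles_files:
          - .tmux.conf
      tasks:
        - name: Check whether each dotfile already exists and whether it is a link
          ansible.builtin.stat:
            path: "~/{{ item }}"
          loop: "{{ dotfiles_files }}"
          register: dotfile_stats

        - name: Remove an existing regular file so the link can be created
          ansible.builtin.file:
            path: "~/{{ item.item }}"
            state: absent
          loop: "{{ dotfile_stats.results }}"
          when: item.stat.exists and not item.stat.islnk

        - name: Link dotfiles into the home folder
          ansible.builtin.file:
            src: "{{ dotfiles_clone_dir }}/{{ item }}"
            dest: "~/{{ item }}"
            state: link
            force: true
          loop: "{{ dotfiles_files }}"
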
2025-09-04 00:45:27.873491 | orchestrator | 2025-09-04 00:45:27 | INFO  | Task a22f30d2-6dd4-4d78-a14b-af52e1f562ca is in state STARTED 2025-09-04 00:45:27.873908 | orchestrator | 2025-09-04 00:45:27 | INFO  | Task 61c49f6c-1655-437c-b0dc-6debe6198808 is in state STARTED 2025-09-04 00:45:27.877060 | orchestrator | 2025-09-04 00:45:27 | INFO  | Task 53de278f-d1a9-4c7d-9d95-ed9e95eaeb8f is in state STARTED 2025-09-04 00:45:27.877579 | orchestrator | 2025-09-04 00:45:27 | INFO  | Task 2540858e-ebc2-403c-ab6e-43ea110be254 is in state STARTED 2025-09-04 00:45:27.881065 | orchestrator | 2025-09-04 00:45:27 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:45:27.881096 | orchestrator | 2025-09-04 00:45:27 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:45:30.914879 | orchestrator | 2025-09-04 00:45:30 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:45:30.915201 | orchestrator | 2025-09-04 00:45:30 | INFO  | Task c9ef39a4-ab45-403c-ad9c-9e63e9c7ccd0 is in state STARTED 2025-09-04 00:45:30.915972 | orchestrator | 2025-09-04 00:45:30 | INFO  | Task a22f30d2-6dd4-4d78-a14b-af52e1f562ca is in state STARTED 2025-09-04 00:45:30.916816 | orchestrator | 2025-09-04 00:45:30 | INFO  | Task 61c49f6c-1655-437c-b0dc-6debe6198808 is in state STARTED 2025-09-04 00:45:30.919322 | orchestrator | 2025-09-04 00:45:30 | INFO  | Task 53de278f-d1a9-4c7d-9d95-ed9e95eaeb8f is in state STARTED 2025-09-04 00:45:30.926422 | orchestrator | 2025-09-04 00:45:30 | INFO  | Task 2540858e-ebc2-403c-ab6e-43ea110be254 is in state STARTED 2025-09-04 00:45:30.926968 | orchestrator | 2025-09-04 00:45:30 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:45:30.927282 | orchestrator | 2025-09-04 00:45:30 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:45:34.172949 | orchestrator | 2025-09-04 00:45:34 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:45:34.174978 | orchestrator | 2025-09-04 00:45:34 | INFO  | Task c9ef39a4-ab45-403c-ad9c-9e63e9c7ccd0 is in state STARTED 2025-09-04 00:45:34.175926 | orchestrator | 2025-09-04 00:45:34 | INFO  | Task a22f30d2-6dd4-4d78-a14b-af52e1f562ca is in state STARTED 2025-09-04 00:45:34.179029 | orchestrator | 2025-09-04 00:45:34 | INFO  | Task 61c49f6c-1655-437c-b0dc-6debe6198808 is in state STARTED 2025-09-04 00:45:34.180550 | orchestrator | 2025-09-04 00:45:34 | INFO  | Task 53de278f-d1a9-4c7d-9d95-ed9e95eaeb8f is in state STARTED 2025-09-04 00:45:34.180575 | orchestrator | 2025-09-04 00:45:34 | INFO  | Task 2540858e-ebc2-403c-ab6e-43ea110be254 is in state STARTED 2025-09-04 00:45:34.181715 | orchestrator | 2025-09-04 00:45:34 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:45:34.181737 | orchestrator | 2025-09-04 00:45:34 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:45:37.236313 | orchestrator | 2025-09-04 00:45:37 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:45:37.237557 | orchestrator | 2025-09-04 00:45:37 | INFO  | Task c9ef39a4-ab45-403c-ad9c-9e63e9c7ccd0 is in state STARTED 2025-09-04 00:45:37.239823 | orchestrator | 2025-09-04 00:45:37 | INFO  | Task a22f30d2-6dd4-4d78-a14b-af52e1f562ca is in state STARTED 2025-09-04 00:45:37.241415 | orchestrator | 2025-09-04 00:45:37 | INFO  | Task 61c49f6c-1655-437c-b0dc-6debe6198808 is in state STARTED 2025-09-04 00:45:37.245116 | orchestrator | 2025-09-04 00:45:37 | INFO  | Task 
53de278f-d1a9-4c7d-9d95-ed9e95eaeb8f is in state STARTED 2025-09-04 00:45:37.246839 | orchestrator | 2025-09-04 00:45:37 | INFO  | Task 2540858e-ebc2-403c-ab6e-43ea110be254 is in state STARTED 2025-09-04 00:45:37.250113 | orchestrator | 2025-09-04 00:45:37 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:45:37.250136 | orchestrator | 2025-09-04 00:45:37 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:45:40.488051 | orchestrator | 2025-09-04 00:45:40 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:45:40.488162 | orchestrator | 2025-09-04 00:45:40 | INFO  | Task c9ef39a4-ab45-403c-ad9c-9e63e9c7ccd0 is in state STARTED 2025-09-04 00:45:40.488178 | orchestrator | 2025-09-04 00:45:40 | INFO  | Task a22f30d2-6dd4-4d78-a14b-af52e1f562ca is in state STARTED 2025-09-04 00:45:40.488190 | orchestrator | 2025-09-04 00:45:40 | INFO  | Task 61c49f6c-1655-437c-b0dc-6debe6198808 is in state STARTED 2025-09-04 00:45:40.488201 | orchestrator | 2025-09-04 00:45:40 | INFO  | Task 53de278f-d1a9-4c7d-9d95-ed9e95eaeb8f is in state STARTED 2025-09-04 00:45:40.488212 | orchestrator | 2025-09-04 00:45:40 | INFO  | Task 2540858e-ebc2-403c-ab6e-43ea110be254 is in state STARTED 2025-09-04 00:45:40.488223 | orchestrator | 2025-09-04 00:45:40 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:45:40.488234 | orchestrator | 2025-09-04 00:45:40 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:45:43.398211 | orchestrator | 2025-09-04 00:45:43 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:45:43.399504 | orchestrator | 2025-09-04 00:45:43 | INFO  | Task c9ef39a4-ab45-403c-ad9c-9e63e9c7ccd0 is in state STARTED 2025-09-04 00:45:43.401973 | orchestrator | 2025-09-04 00:45:43 | INFO  | Task a22f30d2-6dd4-4d78-a14b-af52e1f562ca is in state STARTED 2025-09-04 00:45:43.402437 | orchestrator | 2025-09-04 00:45:43 | INFO  | Task 61c49f6c-1655-437c-b0dc-6debe6198808 is in state SUCCESS 2025-09-04 00:45:43.415661 | orchestrator | 2025-09-04 00:45:43 | INFO  | Task 53de278f-d1a9-4c7d-9d95-ed9e95eaeb8f is in state STARTED 2025-09-04 00:45:43.418630 | orchestrator | 2025-09-04 00:45:43 | INFO  | Task 2540858e-ebc2-403c-ab6e-43ea110be254 is in state STARTED 2025-09-04 00:45:43.433156 | orchestrator | 2025-09-04 00:45:43 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:45:43.434103 | orchestrator | 2025-09-04 00:45:43 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:45:46.846013 | orchestrator | 2025-09-04 00:45:46 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:45:46.846174 | orchestrator | 2025-09-04 00:45:46 | INFO  | Task c9ef39a4-ab45-403c-ad9c-9e63e9c7ccd0 is in state STARTED 2025-09-04 00:45:46.846187 | orchestrator | 2025-09-04 00:45:46 | INFO  | Task a22f30d2-6dd4-4d78-a14b-af52e1f562ca is in state STARTED 2025-09-04 00:45:46.846198 | orchestrator | 2025-09-04 00:45:46 | INFO  | Task 53de278f-d1a9-4c7d-9d95-ed9e95eaeb8f is in state STARTED 2025-09-04 00:45:46.846208 | orchestrator | 2025-09-04 00:45:46 | INFO  | Task 2540858e-ebc2-403c-ab6e-43ea110be254 is in state STARTED 2025-09-04 00:45:46.846218 | orchestrator | 2025-09-04 00:45:46 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:45:46.846228 | orchestrator | 2025-09-04 00:45:46 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:45:49.762049 | orchestrator | 2025-09-04 
00:45:49 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:45:49.762140 | orchestrator | 2025-09-04 00:45:49 | INFO  | Task c9ef39a4-ab45-403c-ad9c-9e63e9c7ccd0 is in state STARTED 2025-09-04 00:45:49.762150 | orchestrator | 2025-09-04 00:45:49 | INFO  | Task a22f30d2-6dd4-4d78-a14b-af52e1f562ca is in state STARTED 2025-09-04 00:45:49.762180 | orchestrator | 2025-09-04 00:45:49 | INFO  | Task 53de278f-d1a9-4c7d-9d95-ed9e95eaeb8f is in state STARTED 2025-09-04 00:45:49.762188 | orchestrator | 2025-09-04 00:45:49 | INFO  | Task 2540858e-ebc2-403c-ab6e-43ea110be254 is in state STARTED 2025-09-04 00:45:49.762195 | orchestrator | 2025-09-04 00:45:49 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:45:49.762203 | orchestrator | 2025-09-04 00:45:49 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:45:52.715630 | orchestrator | 2025-09-04 00:45:52 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:45:52.716455 | orchestrator | 2025-09-04 00:45:52 | INFO  | Task c9ef39a4-ab45-403c-ad9c-9e63e9c7ccd0 is in state STARTED 2025-09-04 00:45:52.717272 | orchestrator | 2025-09-04 00:45:52 | INFO  | Task a22f30d2-6dd4-4d78-a14b-af52e1f562ca is in state STARTED 2025-09-04 00:45:52.717780 | orchestrator | 2025-09-04 00:45:52 | INFO  | Task 53de278f-d1a9-4c7d-9d95-ed9e95eaeb8f is in state SUCCESS 2025-09-04 00:45:52.718825 | orchestrator | 2025-09-04 00:45:52 | INFO  | Task 2540858e-ebc2-403c-ab6e-43ea110be254 is in state STARTED 2025-09-04 00:45:52.720226 | orchestrator | 2025-09-04 00:45:52 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:45:52.720505 | orchestrator | 2025-09-04 00:45:52 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:45:55.794241 | orchestrator | 2025-09-04 00:45:55 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:45:55.794358 | orchestrator | 2025-09-04 00:45:55 | INFO  | Task c9ef39a4-ab45-403c-ad9c-9e63e9c7ccd0 is in state STARTED 2025-09-04 00:45:55.794374 | orchestrator | 2025-09-04 00:45:55 | INFO  | Task a22f30d2-6dd4-4d78-a14b-af52e1f562ca is in state STARTED 2025-09-04 00:45:55.794386 | orchestrator | 2025-09-04 00:45:55 | INFO  | Task 2540858e-ebc2-403c-ab6e-43ea110be254 is in state STARTED 2025-09-04 00:45:55.794396 | orchestrator | 2025-09-04 00:45:55 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:45:55.794408 | orchestrator | 2025-09-04 00:45:55 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:45:58.818241 | orchestrator | 2025-09-04 00:45:58 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:45:58.820262 | orchestrator | 2025-09-04 00:45:58 | INFO  | Task c9ef39a4-ab45-403c-ad9c-9e63e9c7ccd0 is in state STARTED 2025-09-04 00:45:58.823128 | orchestrator | 2025-09-04 00:45:58 | INFO  | Task a22f30d2-6dd4-4d78-a14b-af52e1f562ca is in state STARTED 2025-09-04 00:45:58.827137 | orchestrator | 2025-09-04 00:45:58 | INFO  | Task 2540858e-ebc2-403c-ab6e-43ea110be254 is in state STARTED 2025-09-04 00:45:58.830246 | orchestrator | 2025-09-04 00:45:58 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:45:58.830274 | orchestrator | 2025-09-04 00:45:58 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:46:01.895401 | orchestrator | 2025-09-04 00:46:01 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:46:01.896151 | 
orchestrator | 2025-09-04 00:46:01 | INFO  | Task c9ef39a4-ab45-403c-ad9c-9e63e9c7ccd0 is in state STARTED 2025-09-04 00:46:01.898218 | orchestrator | 2025-09-04 00:46:01 | INFO  | Task a22f30d2-6dd4-4d78-a14b-af52e1f562ca is in state STARTED 2025-09-04 00:46:01.899208 | orchestrator | 2025-09-04 00:46:01 | INFO  | Task 2540858e-ebc2-403c-ab6e-43ea110be254 is in state STARTED 2025-09-04 00:46:01.900900 | orchestrator | 2025-09-04 00:46:01 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:46:01.900957 | orchestrator | 2025-09-04 00:46:01 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:46:05.087470 | orchestrator | 2025-09-04 00:46:05 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:46:05.088586 | orchestrator | 2025-09-04 00:46:05 | INFO  | Task c9ef39a4-ab45-403c-ad9c-9e63e9c7ccd0 is in state STARTED 2025-09-04 00:46:05.091747 | orchestrator | 2025-09-04 00:46:05 | INFO  | Task a22f30d2-6dd4-4d78-a14b-af52e1f562ca is in state STARTED 2025-09-04 00:46:05.094124 | orchestrator | 2025-09-04 00:46:05 | INFO  | Task 2540858e-ebc2-403c-ab6e-43ea110be254 is in state STARTED 2025-09-04 00:46:05.096637 | orchestrator | 2025-09-04 00:46:05 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:46:05.096663 | orchestrator | 2025-09-04 00:46:05 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:46:08.153119 | orchestrator | 2025-09-04 00:46:08 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:46:08.160506 | orchestrator | 2025-09-04 00:46:08 | INFO  | Task c9ef39a4-ab45-403c-ad9c-9e63e9c7ccd0 is in state STARTED 2025-09-04 00:46:08.160833 | orchestrator | 2025-09-04 00:46:08 | INFO  | Task a22f30d2-6dd4-4d78-a14b-af52e1f562ca is in state STARTED 2025-09-04 00:46:08.163046 | orchestrator | 2025-09-04 00:46:08 | INFO  | Task 2540858e-ebc2-403c-ab6e-43ea110be254 is in state STARTED 2025-09-04 00:46:08.179798 | orchestrator | 2025-09-04 00:46:08 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:46:08.182180 | orchestrator | 2025-09-04 00:46:08 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:46:11.273017 | orchestrator | 2025-09-04 00:46:11 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:46:11.274357 | orchestrator | 2025-09-04 00:46:11 | INFO  | Task c9ef39a4-ab45-403c-ad9c-9e63e9c7ccd0 is in state STARTED 2025-09-04 00:46:11.275639 | orchestrator | 2025-09-04 00:46:11 | INFO  | Task a22f30d2-6dd4-4d78-a14b-af52e1f562ca is in state STARTED 2025-09-04 00:46:11.276986 | orchestrator | 2025-09-04 00:46:11 | INFO  | Task 2540858e-ebc2-403c-ab6e-43ea110be254 is in state STARTED 2025-09-04 00:46:11.278278 | orchestrator | 2025-09-04 00:46:11 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:46:11.278303 | orchestrator | 2025-09-04 00:46:11 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:46:14.332137 | orchestrator | 2025-09-04 00:46:14 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:46:14.334156 | orchestrator | 2025-09-04 00:46:14 | INFO  | Task c9ef39a4-ab45-403c-ad9c-9e63e9c7ccd0 is in state STARTED 2025-09-04 00:46:14.336030 | orchestrator | 2025-09-04 00:46:14 | INFO  | Task a22f30d2-6dd4-4d78-a14b-af52e1f562ca is in state STARTED 2025-09-04 00:46:14.337811 | orchestrator | 2025-09-04 00:46:14 | INFO  | Task 2540858e-ebc2-403c-ab6e-43ea110be254 is in state STARTED 
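
Between plays, the deploy wrapper on the manager keeps polling the state of the queued background tasks (the UUIDs above) roughly every second and only moves on once each one reports SUCCESS, which is what the repeated "Wait 1 second(s) until the next check" lines record. The same wait-until-done idea can be written on the Ansible side with until, retries and delay; the sketch below only illustrates that polling loop and is not the wrapper's real implementation — the show-task-state helper is hypothetical, and the UUID is copied from the log purely as a placeholder.

    - name: Wait for a background task to finish (illustrative polling sketch)
      hosts: localhost
      gather_facts: false
      vars:
        task_id: e2a0f5bd-c4a3-4221-94bb-c536b75ec790   # placeholder UUID taken from the log
      tasks:
        - name: Check the task state once per second until it reports SUCCESS
          # /usr/local/bin/show-task-state is a hypothetical helper script, not an OSISM command
          ansible.builtin.command: /usr/local/bin/show-task-state {{ task_id }}
          register: task_state
          changed_when: false
          failed_when: false
          retries: 600                 # give up after roughly ten minutes
          delay: 1                     # "Wait 1 second(s) until the next check"
          until: task_state.stdout == "SUCCESS"
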
2025-09-04 00:46:14.341015 | orchestrator | 2025-09-04 00:46:14 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:46:14.341038 | orchestrator | 2025-09-04 00:46:14 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:46:17.393990 | orchestrator | 2025-09-04 00:46:17 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:46:17.402647 | orchestrator | 2025-09-04 00:46:17 | INFO  | Task c9ef39a4-ab45-403c-ad9c-9e63e9c7ccd0 is in state STARTED 2025-09-04 00:46:17.402858 | orchestrator | 2025-09-04 00:46:17 | INFO  | Task a22f30d2-6dd4-4d78-a14b-af52e1f562ca is in state STARTED 2025-09-04 00:46:17.402879 | orchestrator | 2025-09-04 00:46:17 | INFO  | Task 2540858e-ebc2-403c-ab6e-43ea110be254 is in state SUCCESS 2025-09-04 00:46:17.402918 | orchestrator | 2025-09-04 00:46:17 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:46:17.402930 | orchestrator | 2025-09-04 00:46:17 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:46:17.404131 | orchestrator | 2025-09-04 00:46:17.404169 | orchestrator | 2025-09-04 00:46:17.404181 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-09-04 00:46:17.404192 | orchestrator | 2025-09-04 00:46:17.404203 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-09-04 00:46:17.404214 | orchestrator | Thursday 04 September 2025 00:45:03 +0000 (0:00:00.543) 0:00:00.543 **** 2025-09-04 00:46:17.404225 | orchestrator | ok: [testbed-manager] => { 2025-09-04 00:46:17.404238 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2025-09-04 00:46:17.404250 | orchestrator | } 2025-09-04 00:46:17.404261 | orchestrator | 2025-09-04 00:46:17.404272 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-09-04 00:46:17.404282 | orchestrator | Thursday 04 September 2025 00:45:04 +0000 (0:00:00.420) 0:00:00.963 **** 2025-09-04 00:46:17.404293 | orchestrator | ok: [testbed-manager] 2025-09-04 00:46:17.404304 | orchestrator | 2025-09-04 00:46:17.404315 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-09-04 00:46:17.404325 | orchestrator | Thursday 04 September 2025 00:45:05 +0000 (0:00:01.342) 0:00:02.306 **** 2025-09-04 00:46:17.404336 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-09-04 00:46:17.404346 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-09-04 00:46:17.404357 | orchestrator | 2025-09-04 00:46:17.404367 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-09-04 00:46:17.404378 | orchestrator | Thursday 04 September 2025 00:45:07 +0000 (0:00:01.857) 0:00:04.163 **** 2025-09-04 00:46:17.404388 | orchestrator | changed: [testbed-manager] 2025-09-04 00:46:17.404398 | orchestrator | 2025-09-04 00:46:17.404409 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-09-04 00:46:17.404420 | orchestrator | Thursday 04 September 2025 00:45:09 +0000 (0:00:01.966) 0:00:06.130 **** 2025-09-04 00:46:17.404430 | orchestrator | changed: [testbed-manager] 2025-09-04 00:46:17.404440 | orchestrator | 2025-09-04 00:46:17.404451 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-09-04 00:46:17.404461 | orchestrator | Thursday 04 September 2025 00:45:10 +0000 (0:00:01.469) 0:00:07.600 **** 2025-09-04 00:46:17.404499 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
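
The homer play above shows the deployment pattern that osism.services.homer and osism.services.openstackclient both follow in this run: create the service directory under /opt (here /opt/homer and /opt/homer/configuration), put a service configuration and a docker-compose.yml in place, then bring the compose project up while retrying until Docker accepts it — which is why the single "FAILED - RETRYING ... (10 retries left)" line above is followed by ok rather than a failure. Below is a minimal, hypothetical sketch of that pattern, not the actual osism.services.homer role; the template names, file modes and retry timing are assumptions.

    - name: Deploy a small docker-compose based service (illustrative sketch)
      hosts: testbed-manager
      vars:
        service_dir: /opt/homer
      tasks:
        - name: Create required directories
          ansible.builtin.file:
            path: "{{ item }}"
            state: directory
            mode: "0755"
          loop:
            - "{{ service_dir }}"
            - "{{ service_dir }}/configuration"

        - name: Copy config.yml configuration file
          ansible.builtin.template:
            src: config.yml.j2                  # hypothetical template name
            dest: "{{ service_dir }}/configuration/config.yml"
            mode: "0644"
          notify: Restart service

        - name: Copy docker-compose.yml file
          ansible.builtin.template:
            src: docker-compose.yml.j2          # hypothetical template name
            dest: "{{ service_dir }}/docker-compose.yml"
            mode: "0644"
          notify: Restart service

        - name: Manage the service, retrying until docker compose succeeds
          ansible.builtin.command: docker compose up -d
          args:
            chdir: "{{ service_dir }}"
          register: compose_up
          changed_when: false
          failed_when: false
          retries: 10                           # retry budget similar to the one visible in the log
          delay: 5
          until: compose_up.rc == 0

      handlers:
        - name: Restart service
          ansible.builtin.command: docker compose restart
          args:
            chdir: "{{ service_dir }}"
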
2025-09-04 00:46:17.404511 | orchestrator | ok: [testbed-manager] 2025-09-04 00:46:17.404522 | orchestrator | 2025-09-04 00:46:17.404533 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-09-04 00:46:17.404545 | orchestrator | Thursday 04 September 2025 00:45:37 +0000 (0:00:26.540) 0:00:34.140 **** 2025-09-04 00:46:17.404556 | orchestrator | changed: [testbed-manager] 2025-09-04 00:46:17.404568 | orchestrator | 2025-09-04 00:46:17.404579 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:46:17.404591 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:46:17.404604 | orchestrator | 2025-09-04 00:46:17.404615 | orchestrator | 2025-09-04 00:46:17.404626 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 00:46:17.404638 | orchestrator | Thursday 04 September 2025 00:45:42 +0000 (0:00:05.403) 0:00:39.543 **** 2025-09-04 00:46:17.404649 | orchestrator | =============================================================================== 2025-09-04 00:46:17.404660 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.54s 2025-09-04 00:46:17.404671 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 5.40s 2025-09-04 00:46:17.404719 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 1.97s 2025-09-04 00:46:17.404763 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.86s 2025-09-04 00:46:17.404777 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.47s 2025-09-04 00:46:17.404790 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.34s 2025-09-04 00:46:17.404802 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.42s 2025-09-04 00:46:17.404814 | orchestrator | 2025-09-04 00:46:17.404827 | orchestrator | 2025-09-04 00:46:17.404838 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-09-04 00:46:17.404851 | orchestrator | 2025-09-04 00:46:17.404863 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-09-04 00:46:17.404876 | orchestrator | Thursday 04 September 2025 00:45:03 +0000 (0:00:00.797) 0:00:00.797 **** 2025-09-04 00:46:17.404888 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-09-04 00:46:17.404902 | orchestrator | 2025-09-04 00:46:17.404914 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-09-04 00:46:17.404926 | orchestrator | Thursday 04 September 2025 00:45:04 +0000 (0:00:00.676) 0:00:01.473 **** 2025-09-04 00:46:17.404938 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-09-04 00:46:17.404951 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-09-04 00:46:17.404964 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-09-04 00:46:17.404977 | orchestrator | 2025-09-04 00:46:17.404988 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-09-04 
00:46:17.405001 | orchestrator | Thursday 04 September 2025 00:45:07 +0000 (0:00:02.717) 0:00:04.191 **** 2025-09-04 00:46:17.405013 | orchestrator | changed: [testbed-manager] 2025-09-04 00:46:17.405026 | orchestrator | 2025-09-04 00:46:17.405039 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-09-04 00:46:17.405052 | orchestrator | Thursday 04 September 2025 00:45:09 +0000 (0:00:02.126) 0:00:06.318 **** 2025-09-04 00:46:17.405076 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-09-04 00:46:17.405088 | orchestrator | ok: [testbed-manager] 2025-09-04 00:46:17.405098 | orchestrator | 2025-09-04 00:46:17.405109 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-09-04 00:46:17.405119 | orchestrator | Thursday 04 September 2025 00:45:44 +0000 (0:00:34.711) 0:00:41.029 **** 2025-09-04 00:46:17.405130 | orchestrator | changed: [testbed-manager] 2025-09-04 00:46:17.405140 | orchestrator | 2025-09-04 00:46:17.405150 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-09-04 00:46:17.405161 | orchestrator | Thursday 04 September 2025 00:45:45 +0000 (0:00:01.137) 0:00:42.167 **** 2025-09-04 00:46:17.405171 | orchestrator | ok: [testbed-manager] 2025-09-04 00:46:17.405182 | orchestrator | 2025-09-04 00:46:17.405192 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-09-04 00:46:17.405203 | orchestrator | Thursday 04 September 2025 00:45:46 +0000 (0:00:00.950) 0:00:43.117 **** 2025-09-04 00:46:17.405213 | orchestrator | changed: [testbed-manager] 2025-09-04 00:46:17.405223 | orchestrator | 2025-09-04 00:46:17.405234 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-09-04 00:46:17.405244 | orchestrator | Thursday 04 September 2025 00:45:48 +0000 (0:00:02.714) 0:00:45.832 **** 2025-09-04 00:46:17.405255 | orchestrator | changed: [testbed-manager] 2025-09-04 00:46:17.405265 | orchestrator | 2025-09-04 00:46:17.405275 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-09-04 00:46:17.405286 | orchestrator | Thursday 04 September 2025 00:45:50 +0000 (0:00:01.612) 0:00:47.444 **** 2025-09-04 00:46:17.405297 | orchestrator | changed: [testbed-manager] 2025-09-04 00:46:17.405314 | orchestrator | 2025-09-04 00:46:17.405325 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-09-04 00:46:17.405335 | orchestrator | Thursday 04 September 2025 00:45:51 +0000 (0:00:00.835) 0:00:48.280 **** 2025-09-04 00:46:17.405346 | orchestrator | ok: [testbed-manager] 2025-09-04 00:46:17.405356 | orchestrator | 2025-09-04 00:46:17.405367 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:46:17.405377 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:46:17.405388 | orchestrator | 2025-09-04 00:46:17.405398 | orchestrator | 2025-09-04 00:46:17.405409 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 00:46:17.405419 | orchestrator | Thursday 04 September 2025 00:45:51 +0000 (0:00:00.508) 0:00:48.789 **** 2025-09-04 00:46:17.405430 | orchestrator | 
=============================================================================== 2025-09-04 00:46:17.405445 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 34.71s 2025-09-04 00:46:17.405456 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.72s 2025-09-04 00:46:17.405466 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.71s 2025-09-04 00:46:17.405477 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.13s 2025-09-04 00:46:17.405487 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.61s 2025-09-04 00:46:17.405498 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.14s 2025-09-04 00:46:17.405508 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.95s 2025-09-04 00:46:17.405519 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.84s 2025-09-04 00:46:17.405529 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.68s 2025-09-04 00:46:17.405540 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.51s 2025-09-04 00:46:17.405550 | orchestrator | 2025-09-04 00:46:17.405561 | orchestrator | 2025-09-04 00:46:17.405571 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-04 00:46:17.405582 | orchestrator | 2025-09-04 00:46:17.405592 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-04 00:46:17.405603 | orchestrator | Thursday 04 September 2025 00:45:04 +0000 (0:00:00.279) 0:00:00.279 **** 2025-09-04 00:46:17.405613 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-09-04 00:46:17.405624 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-09-04 00:46:17.405634 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-09-04 00:46:17.405645 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-09-04 00:46:17.405655 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-09-04 00:46:17.405665 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-09-04 00:46:17.405676 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-09-04 00:46:17.405703 | orchestrator | 2025-09-04 00:46:17.405714 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-09-04 00:46:17.405724 | orchestrator | 2025-09-04 00:46:17.405735 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-09-04 00:46:17.405745 | orchestrator | Thursday 04 September 2025 00:45:05 +0000 (0:00:01.128) 0:00:01.407 **** 2025-09-04 00:46:17.405769 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:46:17.405782 | orchestrator | 2025-09-04 00:46:17.405793 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-09-04 00:46:17.405803 | orchestrator | Thursday 04 September 2025 00:45:08 +0000 (0:00:02.771) 0:00:04.179 **** 2025-09-04 
00:46:17.405819 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:46:17.405830 | orchestrator | ok: [testbed-manager] 2025-09-04 00:46:17.405841 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:46:17.405851 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:46:17.405862 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:46:17.405877 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:46:17.405888 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:46:17.405899 | orchestrator | 2025-09-04 00:46:17.405910 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-09-04 00:46:17.405920 | orchestrator | Thursday 04 September 2025 00:45:09 +0000 (0:00:01.636) 0:00:05.816 **** 2025-09-04 00:46:17.405931 | orchestrator | ok: [testbed-manager] 2025-09-04 00:46:17.405941 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:46:17.405951 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:46:17.405962 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:46:17.405972 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:46:17.405982 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:46:17.405993 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:46:17.406003 | orchestrator | 2025-09-04 00:46:17.406074 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-09-04 00:46:17.406089 | orchestrator | Thursday 04 September 2025 00:45:14 +0000 (0:00:04.294) 0:00:10.110 **** 2025-09-04 00:46:17.406099 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:46:17.406110 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:46:17.406121 | orchestrator | changed: [testbed-manager] 2025-09-04 00:46:17.406131 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:46:17.406142 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:46:17.406152 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:46:17.406163 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:46:17.406173 | orchestrator | 2025-09-04 00:46:17.406184 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-09-04 00:46:17.406194 | orchestrator | Thursday 04 September 2025 00:45:17 +0000 (0:00:02.966) 0:00:13.077 **** 2025-09-04 00:46:17.406205 | orchestrator | changed: [testbed-manager] 2025-09-04 00:46:17.406215 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:46:17.406226 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:46:17.406236 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:46:17.406246 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:46:17.406257 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:46:17.406267 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:46:17.406277 | orchestrator | 2025-09-04 00:46:17.406288 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-09-04 00:46:17.406299 | orchestrator | Thursday 04 September 2025 00:45:33 +0000 (0:00:16.205) 0:00:29.282 **** 2025-09-04 00:46:17.406309 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:46:17.406320 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:46:17.406330 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:46:17.406340 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:46:17.406351 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:46:17.406361 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:46:17.406371 | orchestrator | changed: [testbed-manager] 2025-09-04 00:46:17.406381 | 
orchestrator | 2025-09-04 00:46:17.406392 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-09-04 00:46:17.406407 | orchestrator | Thursday 04 September 2025 00:45:56 +0000 (0:00:23.075) 0:00:52.358 **** 2025-09-04 00:46:17.406419 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:46:17.406432 | orchestrator | 2025-09-04 00:46:17.406442 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-09-04 00:46:17.406453 | orchestrator | Thursday 04 September 2025 00:45:57 +0000 (0:00:01.150) 0:00:53.508 **** 2025-09-04 00:46:17.406470 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-09-04 00:46:17.406481 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-09-04 00:46:17.406491 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-09-04 00:46:17.406501 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-09-04 00:46:17.406512 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-09-04 00:46:17.406522 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-09-04 00:46:17.406533 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-09-04 00:46:17.406543 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-09-04 00:46:17.406554 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-09-04 00:46:17.406564 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-09-04 00:46:17.406574 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-09-04 00:46:17.406585 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-09-04 00:46:17.406595 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-09-04 00:46:17.406605 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-09-04 00:46:17.406616 | orchestrator | 2025-09-04 00:46:17.406626 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-09-04 00:46:17.406637 | orchestrator | Thursday 04 September 2025 00:46:01 +0000 (0:00:03.638) 0:00:57.147 **** 2025-09-04 00:46:17.406648 | orchestrator | ok: [testbed-manager] 2025-09-04 00:46:17.406658 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:46:17.406669 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:46:17.406679 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:46:17.406721 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:46:17.406731 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:46:17.406742 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:46:17.406752 | orchestrator | 2025-09-04 00:46:17.406763 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-09-04 00:46:17.406774 | orchestrator | Thursday 04 September 2025 00:46:02 +0000 (0:00:01.035) 0:00:58.182 **** 2025-09-04 00:46:17.406784 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:46:17.406795 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:46:17.406806 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:46:17.406816 | orchestrator | changed: [testbed-manager] 2025-09-04 00:46:17.406827 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:46:17.406837 | orchestrator | 
changed: [testbed-node-4] 2025-09-04 00:46:17.406848 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:46:17.406858 | orchestrator | 2025-09-04 00:46:17.406869 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-09-04 00:46:17.406888 | orchestrator | Thursday 04 September 2025 00:46:03 +0000 (0:00:01.504) 0:00:59.687 **** 2025-09-04 00:46:17.406899 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:46:17.406910 | orchestrator | ok: [testbed-manager] 2025-09-04 00:46:17.406920 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:46:17.406931 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:46:17.406941 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:46:17.406952 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:46:17.406962 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:46:17.406973 | orchestrator | 2025-09-04 00:46:17.406984 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-09-04 00:46:17.406994 | orchestrator | Thursday 04 September 2025 00:46:06 +0000 (0:00:02.625) 0:01:02.313 **** 2025-09-04 00:46:17.407005 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:46:17.407016 | orchestrator | ok: [testbed-manager] 2025-09-04 00:46:17.407026 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:46:17.407036 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:46:17.407047 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:46:17.407057 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:46:17.407067 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:46:17.407078 | orchestrator | 2025-09-04 00:46:17.407089 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-09-04 00:46:17.407106 | orchestrator | Thursday 04 September 2025 00:46:08 +0000 (0:00:02.507) 0:01:04.820 **** 2025-09-04 00:46:17.407117 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-09-04 00:46:17.407129 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:46:17.407140 | orchestrator | 2025-09-04 00:46:17.407151 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-09-04 00:46:17.407161 | orchestrator | Thursday 04 September 2025 00:46:10 +0000 (0:00:02.080) 0:01:06.902 **** 2025-09-04 00:46:17.407172 | orchestrator | changed: [testbed-manager] 2025-09-04 00:46:17.407183 | orchestrator | 2025-09-04 00:46:17.407193 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-09-04 00:46:17.407204 | orchestrator | Thursday 04 September 2025 00:46:13 +0000 (0:00:02.217) 0:01:09.119 **** 2025-09-04 00:46:17.407214 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:46:17.407225 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:46:17.407235 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:46:17.407246 | orchestrator | changed: [testbed-manager] 2025-09-04 00:46:17.407256 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:46:17.407267 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:46:17.407277 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:46:17.407288 | orchestrator | 2025-09-04 00:46:17.407307 | orchestrator | PLAY RECAP 
********************************************************************* 2025-09-04 00:46:17.407319 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:46:17.407330 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:46:17.407341 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:46:17.407351 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:46:17.407362 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:46:17.407373 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:46:17.407384 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:46:17.407394 | orchestrator | 2025-09-04 00:46:17.407405 | orchestrator | 2025-09-04 00:46:17.407415 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 00:46:17.407426 | orchestrator | Thursday 04 September 2025 00:46:16 +0000 (0:00:03.525) 0:01:12.645 **** 2025-09-04 00:46:17.407437 | orchestrator | =============================================================================== 2025-09-04 00:46:17.407447 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 23.08s 2025-09-04 00:46:17.407458 | orchestrator | osism.services.netdata : Add repository -------------------------------- 16.21s 2025-09-04 00:46:17.407468 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.29s 2025-09-04 00:46:17.407479 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.64s 2025-09-04 00:46:17.407489 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.53s 2025-09-04 00:46:17.407499 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.97s 2025-09-04 00:46:17.407510 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.77s 2025-09-04 00:46:17.407526 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.63s 2025-09-04 00:46:17.407537 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.51s 2025-09-04 00:46:17.407547 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.22s 2025-09-04 00:46:17.407558 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 2.08s 2025-09-04 00:46:17.407573 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.64s 2025-09-04 00:46:17.407584 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.50s 2025-09-04 00:46:17.407595 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.15s 2025-09-04 00:46:17.407605 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.13s 2025-09-04 00:46:17.407616 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.04s 2025-09-04 00:46:20.459091 | orchestrator | 2025-09-04 00:46:20 | INFO  | Task 
e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:46:20.461863 | orchestrator | 2025-09-04 00:46:20 | INFO  | Task c9ef39a4-ab45-403c-ad9c-9e63e9c7ccd0 is in state SUCCESS 2025-09-04 00:46:20.464318 | orchestrator | 2025-09-04 00:46:20 | INFO  | Task a22f30d2-6dd4-4d78-a14b-af52e1f562ca is in state STARTED 2025-09-04 00:46:20.464347 | orchestrator | 2025-09-04 00:46:20 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:46:20.464359 | orchestrator | 2025-09-04 00:46:20 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeat roughly every 3 seconds from 00:46:23 through 00:47:18; tasks e2a0f5bd-c4a3-4221-94bb-c536b75ec790, a22f30d2-6dd4-4d78-a14b-af52e1f562ca and 2455333a-1467-4e15-afeb-18d3544d41aa remain in state STARTED throughout ...]
2025-09-04 00:47:21.431569 | orchestrator | 2025-09-04 00:47:21 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:47:21.432423 | orchestrator 
| 2025-09-04 00:47:21 | INFO  | Task a22f30d2-6dd4-4d78-a14b-af52e1f562ca is in state STARTED 2025-09-04 00:47:21.434068 | orchestrator | 2025-09-04 00:47:21 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:47:21.434093 | orchestrator | 2025-09-04 00:47:21 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:47:24.475572 | orchestrator | 2025-09-04 00:47:24 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:47:24.477016 | orchestrator | 2025-09-04 00:47:24 | INFO  | Task a22f30d2-6dd4-4d78-a14b-af52e1f562ca is in state STARTED 2025-09-04 00:47:24.478386 | orchestrator | 2025-09-04 00:47:24 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:47:24.478811 | orchestrator | 2025-09-04 00:47:24 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:47:27.522128 | orchestrator | 2025-09-04 00:47:27 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:47:27.525161 | orchestrator | 2025-09-04 00:47:27 | INFO  | Task a22f30d2-6dd4-4d78-a14b-af52e1f562ca is in state STARTED 2025-09-04 00:47:27.526393 | orchestrator | 2025-09-04 00:47:27 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:47:27.526448 | orchestrator | 2025-09-04 00:47:27 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:47:30.565813 | orchestrator | 2025-09-04 00:47:30 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:47:30.570870 | orchestrator | 2025-09-04 00:47:30 | INFO  | Task a22f30d2-6dd4-4d78-a14b-af52e1f562ca is in state SUCCESS 2025-09-04 00:47:30.571050 | orchestrator | 2025-09-04 00:47:30.571072 | orchestrator | 2025-09-04 00:47:30.571084 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-09-04 00:47:30.571096 | orchestrator | 2025-09-04 00:47:30.571107 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-09-04 00:47:30.571119 | orchestrator | Thursday 04 September 2025 00:45:25 +0000 (0:00:00.186) 0:00:00.187 **** 2025-09-04 00:47:30.571130 | orchestrator | ok: [testbed-manager] 2025-09-04 00:47:30.571142 | orchestrator | 2025-09-04 00:47:30.571153 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-09-04 00:47:30.571165 | orchestrator | Thursday 04 September 2025 00:45:26 +0000 (0:00:00.737) 0:00:00.924 **** 2025-09-04 00:47:30.571176 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-09-04 00:47:30.571208 | orchestrator | 2025-09-04 00:47:30.571220 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-09-04 00:47:30.571231 | orchestrator | Thursday 04 September 2025 00:45:27 +0000 (0:00:00.529) 0:00:01.454 **** 2025-09-04 00:47:30.571243 | orchestrator | changed: [testbed-manager] 2025-09-04 00:47:30.571254 | orchestrator | 2025-09-04 00:47:30.571265 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-09-04 00:47:30.571276 | orchestrator | Thursday 04 September 2025 00:45:28 +0000 (0:00:01.000) 0:00:02.455 **** 2025-09-04 00:47:30.571287 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
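The long run of "Task ... is in state STARTED" / "Wait 1 second(s) until the next check" lines above is the OSISM client polling the manager until every task it queued reports a terminal state; the buffered Ansible output of a task (such as the phpmyadmin play) is printed once that task finishes. The snippet below is only a minimal Python sketch of that polling pattern, not the actual OSISM implementation; get_state is a hypothetical stand-in for whatever returns the Celery-style state string of a task UUID.

    import time
    from typing import Callable, Iterable

    def wait_for_tasks(task_ids: Iterable[str],
                       get_state: Callable[[str], str],
                       interval: int = 1) -> None:
        # Poll every task until it reaches a terminal Celery state, logging
        # progress in the same style as the console output above.
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_state(task_id)          # hypothetical state lookup
                print(f"INFO  | Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"INFO  | Wait {interval} second(s) until the next check")
                time.sleep(interval)

For example, calling wait_for_tasks with a get_state that immediately returns "SUCCESS" exits after one pass, while states that stay "STARTED" reproduce the repeated wait messages seen above.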
2025-09-04 00:47:30.571297 | orchestrator | ok: [testbed-manager] 2025-09-04 00:47:30.571308 | orchestrator | 2025-09-04 00:47:30.571319 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-09-04 00:47:30.571329 | orchestrator | Thursday 04 September 2025 00:46:14 +0000 (0:00:46.682) 0:00:49.137 **** 2025-09-04 00:47:30.571364 | orchestrator | changed: [testbed-manager] 2025-09-04 00:47:30.571376 | orchestrator | 2025-09-04 00:47:30.571387 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:47:30.571398 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:47:30.571410 | orchestrator | 2025-09-04 00:47:30.571420 | orchestrator | 2025-09-04 00:47:30.571431 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 00:47:30.571442 | orchestrator | Thursday 04 September 2025 00:46:19 +0000 (0:00:04.299) 0:00:53.436 **** 2025-09-04 00:47:30.571452 | orchestrator | =============================================================================== 2025-09-04 00:47:30.571463 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 46.68s 2025-09-04 00:47:30.571473 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 4.30s 2025-09-04 00:47:30.571484 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.00s 2025-09-04 00:47:30.571744 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.74s 2025-09-04 00:47:30.571770 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.53s 2025-09-04 00:47:30.571781 | orchestrator | 2025-09-04 00:47:30.574011 | orchestrator | 2025-09-04 00:47:30.574119 | orchestrator | PLAY [Apply role common] ******************************************************* 2025-09-04 00:47:30.574132 | orchestrator | 2025-09-04 00:47:30.574143 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-09-04 00:47:30.574155 | orchestrator | Thursday 04 September 2025 00:44:56 +0000 (0:00:00.296) 0:00:00.296 **** 2025-09-04 00:47:30.574167 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:47:30.574180 | orchestrator | 2025-09-04 00:47:30.574191 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-09-04 00:47:30.574202 | orchestrator | Thursday 04 September 2025 00:44:57 +0000 (0:00:01.427) 0:00:01.724 **** 2025-09-04 00:47:30.574214 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-04 00:47:30.574225 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-04 00:47:30.574237 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-04 00:47:30.574254 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-04 00:47:30.574266 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-04 00:47:30.574277 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-04 00:47:30.574288 | orchestrator | changed: 
[testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-04 00:47:30.574300 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-04 00:47:30.574312 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-04 00:47:30.574323 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-04 00:47:30.574334 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-04 00:47:30.574345 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-04 00:47:30.574356 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-04 00:47:30.574367 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-04 00:47:30.574379 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-04 00:47:30.574390 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-04 00:47:30.574415 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-04 00:47:30.574426 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-04 00:47:30.574438 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-04 00:47:30.574449 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-04 00:47:30.574460 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-04 00:47:30.574472 | orchestrator | 2025-09-04 00:47:30.574483 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-09-04 00:47:30.574494 | orchestrator | Thursday 04 September 2025 00:45:02 +0000 (0:00:05.109) 0:00:06.833 **** 2025-09-04 00:47:30.574506 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:47:30.574518 | orchestrator | 2025-09-04 00:47:30.574530 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-09-04 00:47:30.574543 | orchestrator | Thursday 04 September 2025 00:45:04 +0000 (0:00:01.445) 0:00:08.279 **** 2025-09-04 00:47:30.574561 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-04 00:47:30.574581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-04 00:47:30.574637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-04 00:47:30.574659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-04 00:47:30.574673 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.574694 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-04 00:47:30.574707 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-04 00:47:30.574721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.574734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.574761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.574781 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-04 00:47:30.574794 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.574813 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.574839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.574853 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.574867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.574880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.574898 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.574910 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.574925 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.574942 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 
00:47:30.574953 | orchestrator | 2025-09-04 00:47:30.574964 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-09-04 00:47:30.574975 | orchestrator | Thursday 04 September 2025 00:45:10 +0000 (0:00:06.702) 0:00:14.981 **** 2025-09-04 00:47:30.574987 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-04 00:47:30.574999 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:47:30.575010 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:47:30.575022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-04 00:47:30.575049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:47:30.575061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:47:30.575078 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:47:30.575089 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:47:30.575104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-04 00:47:30.575116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:47:30.575127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:47:30.575138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-04 00:47:30.575150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:47:30.575160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:47:30.575171 | orchestrator | 
skipping: [testbed-node-1] 2025-09-04 00:47:30.575183 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:47:30.575199 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-04 00:47:30.575221 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:47:30.575233 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:47:30.575244 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:47:30.575255 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-04 00:47:30.575267 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:47:30.575278 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:47:30.575289 | orchestrator | skipping: [testbed-node-5] 
2025-09-04 00:47:30.575300 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-04 00:47:30.575316 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:47:30.575334 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:47:30.575345 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:47:30.575356 | orchestrator | 2025-09-04 00:47:30.575367 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-09-04 00:47:30.575378 | orchestrator | Thursday 04 September 2025 00:45:12 +0000 (0:00:01.512) 0:00:16.494 **** 2025-09-04 00:47:30.575389 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-04 00:47:30.575400 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:47:30.575411 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:47:30.575428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-04 00:47:30.575440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:47:30.575451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:47:30.575462 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:47:30.575479 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:47:30.575503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-04 00:47:30.575519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:47:30.575530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-09-04 00:47:30.575541 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:47:30.575552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-04 00:47:30.575563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:47:30.575574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:47:30.575638 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-04 00:47:30.575658 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:47:30.575677 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:47:30.575688 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:47:30.575698 | orchestrator | skipping: [testbed-node-3] 2025-09-04 
00:47:30.575717 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-04 00:47:30.575729 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:47:30.575740 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:47:30.575751 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:47:30.575761 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-04 00:47:30.575771 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:47:30.575781 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:47:30.575795 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:47:30.575805 | orchestrator | 2025-09-04 00:47:30.575815 | orchestrator | TASK [common : Copying 
over /run subdirectories conf] ************************** 2025-09-04 00:47:30.575824 | orchestrator | Thursday 04 September 2025 00:45:16 +0000 (0:00:04.413) 0:00:20.908 **** 2025-09-04 00:47:30.575873 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:47:30.575884 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:47:30.575894 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:47:30.575903 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:47:30.575913 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:47:30.575951 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:47:30.575962 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:47:30.575971 | orchestrator | 2025-09-04 00:47:30.575981 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-09-04 00:47:30.575991 | orchestrator | Thursday 04 September 2025 00:45:19 +0000 (0:00:02.423) 0:00:23.331 **** 2025-09-04 00:47:30.576001 | orchestrator | skipping: [testbed-manager] 2025-09-04 00:47:30.576010 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:47:30.576020 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:47:30.576029 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:47:30.576038 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:47:30.576048 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:47:30.576057 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:47:30.576067 | orchestrator | 2025-09-04 00:47:30.576076 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-09-04 00:47:30.576085 | orchestrator | Thursday 04 September 2025 00:45:20 +0000 (0:00:01.764) 0:00:25.096 **** 2025-09-04 00:47:30.576100 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-04 00:47:30.576111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-04 00:47:30.576121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-04 00:47:30.576131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-04 00:47:30.576147 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-04 00:47:30.576157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.576180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.576194 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-04 00:47:30.576204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.576214 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-04 00:47:30.576224 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.576243 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.576253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.576269 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.576279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.576293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.576303 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.576314 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.576324 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.576339 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.576349 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.576358 | orchestrator | 2025-09-04 00:47:30.576368 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-09-04 00:47:30.576378 | orchestrator | Thursday 04 September 2025 00:45:27 +0000 (0:00:06.452) 0:00:31.548 **** 2025-09-04 00:47:30.576388 | orchestrator | [WARNING]: Skipped 2025-09-04 00:47:30.576398 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-09-04 00:47:30.576407 | orchestrator | to this access issue: 2025-09-04 00:47:30.576417 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-09-04 00:47:30.576426 | orchestrator | directory 2025-09-04 00:47:30.576436 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-04 00:47:30.576446 | orchestrator | 2025-09-04 00:47:30.576455 | orchestrator | TASK [common : Find custom fluentd 
filter config files] ************************ 2025-09-04 00:47:30.576464 | orchestrator | Thursday 04 September 2025 00:45:28 +0000 (0:00:00.963) 0:00:32.512 **** 2025-09-04 00:47:30.576474 | orchestrator | [WARNING]: Skipped 2025-09-04 00:47:30.576483 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-09-04 00:47:30.576497 | orchestrator | to this access issue: 2025-09-04 00:47:30.576507 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-09-04 00:47:30.576516 | orchestrator | directory 2025-09-04 00:47:30.576526 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-04 00:47:30.576535 | orchestrator | 2025-09-04 00:47:30.576545 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-09-04 00:47:30.576554 | orchestrator | Thursday 04 September 2025 00:45:29 +0000 (0:00:01.365) 0:00:33.877 **** 2025-09-04 00:47:30.576564 | orchestrator | [WARNING]: Skipped 2025-09-04 00:47:30.576573 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-09-04 00:47:30.576582 | orchestrator | to this access issue: 2025-09-04 00:47:30.576609 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-09-04 00:47:30.576619 | orchestrator | directory 2025-09-04 00:47:30.576628 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-04 00:47:30.576638 | orchestrator | 2025-09-04 00:47:30.576647 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-09-04 00:47:30.576656 | orchestrator | Thursday 04 September 2025 00:45:30 +0000 (0:00:00.924) 0:00:34.802 **** 2025-09-04 00:47:30.576666 | orchestrator | [WARNING]: Skipped 2025-09-04 00:47:30.576679 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-09-04 00:47:30.576689 | orchestrator | to this access issue: 2025-09-04 00:47:30.576698 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-09-04 00:47:30.576713 | orchestrator | directory 2025-09-04 00:47:30.576723 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-04 00:47:30.576732 | orchestrator | 2025-09-04 00:47:30.576742 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-09-04 00:47:30.576751 | orchestrator | Thursday 04 September 2025 00:45:31 +0000 (0:00:00.861) 0:00:35.663 **** 2025-09-04 00:47:30.576760 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:47:30.576770 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:47:30.576779 | orchestrator | changed: [testbed-manager] 2025-09-04 00:47:30.576789 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:47:30.576798 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:47:30.576807 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:47:30.576816 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:47:30.576826 | orchestrator | 2025-09-04 00:47:30.576835 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-09-04 00:47:30.576845 | orchestrator | Thursday 04 September 2025 00:45:36 +0000 (0:00:05.416) 0:00:41.080 **** 2025-09-04 00:47:30.576854 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-04 00:47:30.576864 | orchestrator | changed: [testbed-node-5] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-04 00:47:30.576874 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-04 00:47:30.576883 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-04 00:47:30.576893 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-04 00:47:30.576902 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-04 00:47:30.576912 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-04 00:47:30.576921 | orchestrator | 2025-09-04 00:47:30.576930 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-09-04 00:47:30.576940 | orchestrator | Thursday 04 September 2025 00:45:42 +0000 (0:00:05.348) 0:00:46.428 **** 2025-09-04 00:47:30.576949 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:47:30.576959 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:47:30.576968 | orchestrator | changed: [testbed-manager] 2025-09-04 00:47:30.576977 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:47:30.576987 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:47:30.576996 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:47:30.577005 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:47:30.577015 | orchestrator | 2025-09-04 00:47:30.577024 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-09-04 00:47:30.577034 | orchestrator | Thursday 04 September 2025 00:45:45 +0000 (0:00:03.693) 0:00:50.122 **** 2025-09-04 00:47:30.577044 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-04 00:47:30.577059 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:47:30.577075 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-04 00:47:30.577089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:47:30.577099 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-04 00:47:30.577109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:47:30.577119 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-04 00:47:30.577129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:47:30.577139 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 
00:47:30.577159 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.577169 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.577179 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.577189 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-04 00:47:30.577199 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:47:30.577213 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-04 00:47:30.577224 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:47:30.577233 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-04 00:47:30.577253 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:47:30.577263 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.577279 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.577290 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.577299 | orchestrator | 2025-09-04 00:47:30.577309 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-09-04 00:47:30.577319 | orchestrator | Thursday 04 September 2025 00:45:49 +0000 (0:00:03.322) 0:00:53.445 **** 2025-09-04 00:47:30.577328 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-04 00:47:30.577338 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-04 00:47:30.577347 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-04 00:47:30.577357 | orchestrator | changed: [testbed-manager] => 
(item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-04 00:47:30.577366 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-04 00:47:30.577376 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-04 00:47:30.577385 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-04 00:47:30.577395 | orchestrator | 2025-09-04 00:47:30.577404 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-09-04 00:47:30.577414 | orchestrator | Thursday 04 September 2025 00:45:52 +0000 (0:00:02.784) 0:00:56.229 **** 2025-09-04 00:47:30.577423 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-04 00:47:30.577433 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-04 00:47:30.577442 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-04 00:47:30.577452 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-04 00:47:30.577461 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-04 00:47:30.577475 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-04 00:47:30.577485 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-04 00:47:30.577494 | orchestrator | 2025-09-04 00:47:30.577504 | orchestrator | TASK [common : Check common containers] **************************************** 2025-09-04 00:47:30.577513 | orchestrator | Thursday 04 September 2025 00:45:54 +0000 (0:00:02.772) 0:00:59.001 **** 2025-09-04 00:47:30.577523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-04 00:47:30.577653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-04 00:47:30.577673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}}) 2025-09-04 00:47:30.577684 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-04 00:47:30.577694 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-04 00:47:30.577704 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-04 00:47:30.577714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.577731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.577747 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-04 00:47:30.577758 | orchestrator 
| changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.577771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.577781 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.577791 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.577807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.577817 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.577827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.577842 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.577853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.577866 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.577876 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.577886 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:47:30.577896 | orchestrator | 2025-09-04 00:47:30.577905 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-09-04 00:47:30.577915 | orchestrator | Thursday 04 September 2025 00:45:57 +0000 (0:00:02.966) 0:01:01.968 **** 2025-09-04 00:47:30.577930 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:47:30.577939 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:47:30.577949 | orchestrator | changed: [testbed-manager] 2025-09-04 00:47:30.577958 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:47:30.577968 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:47:30.577977 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:47:30.577986 | 
orchestrator | changed: [testbed-node-5] 2025-09-04 00:47:30.577996 | orchestrator | 2025-09-04 00:47:30.578005 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-09-04 00:47:30.578015 | orchestrator | Thursday 04 September 2025 00:45:59 +0000 (0:00:02.077) 0:01:04.046 **** 2025-09-04 00:47:30.578080 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:47:30.578089 | orchestrator | changed: [testbed-manager] 2025-09-04 00:47:30.578099 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:47:30.578108 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:47:30.578118 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:47:30.578127 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:47:30.578137 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:47:30.578146 | orchestrator | 2025-09-04 00:47:30.578155 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-04 00:47:30.578165 | orchestrator | Thursday 04 September 2025 00:46:00 +0000 (0:00:01.109) 0:01:05.155 **** 2025-09-04 00:47:30.578174 | orchestrator | 2025-09-04 00:47:30.578184 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-04 00:47:30.578193 | orchestrator | Thursday 04 September 2025 00:46:01 +0000 (0:00:00.068) 0:01:05.224 **** 2025-09-04 00:47:30.578203 | orchestrator | 2025-09-04 00:47:30.578212 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-04 00:47:30.578222 | orchestrator | Thursday 04 September 2025 00:46:01 +0000 (0:00:00.061) 0:01:05.286 **** 2025-09-04 00:47:30.578231 | orchestrator | 2025-09-04 00:47:30.578241 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-04 00:47:30.578250 | orchestrator | Thursday 04 September 2025 00:46:01 +0000 (0:00:00.058) 0:01:05.344 **** 2025-09-04 00:47:30.578259 | orchestrator | 2025-09-04 00:47:30.578269 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-04 00:47:30.578278 | orchestrator | Thursday 04 September 2025 00:46:01 +0000 (0:00:00.192) 0:01:05.536 **** 2025-09-04 00:47:30.578289 | orchestrator | 2025-09-04 00:47:30.578300 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-04 00:47:30.578311 | orchestrator | Thursday 04 September 2025 00:46:01 +0000 (0:00:00.065) 0:01:05.602 **** 2025-09-04 00:47:30.578322 | orchestrator | 2025-09-04 00:47:30.578333 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-04 00:47:30.578344 | orchestrator | Thursday 04 September 2025 00:46:01 +0000 (0:00:00.061) 0:01:05.664 **** 2025-09-04 00:47:30.578355 | orchestrator | 2025-09-04 00:47:30.578366 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-09-04 00:47:30.578383 | orchestrator | Thursday 04 September 2025 00:46:01 +0000 (0:00:00.082) 0:01:05.746 **** 2025-09-04 00:47:30.578395 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:47:30.578406 | orchestrator | changed: [testbed-manager] 2025-09-04 00:47:30.578417 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:47:30.578428 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:47:30.578439 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:47:30.578450 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:47:30.578461 
| orchestrator | changed: [testbed-node-3] 2025-09-04 00:47:30.578472 | orchestrator | 2025-09-04 00:47:30.578484 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-09-04 00:47:30.578495 | orchestrator | Thursday 04 September 2025 00:46:43 +0000 (0:00:41.698) 0:01:47.445 **** 2025-09-04 00:47:30.578507 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:47:30.578518 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:47:30.578529 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:47:30.578546 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:47:30.578557 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:47:30.578568 | orchestrator | changed: [testbed-manager] 2025-09-04 00:47:30.578579 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:47:30.578609 | orchestrator | 2025-09-04 00:47:30.578620 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-09-04 00:47:30.578636 | orchestrator | Thursday 04 September 2025 00:47:17 +0000 (0:00:34.568) 0:02:22.013 **** 2025-09-04 00:47:30.578646 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:47:30.578656 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:47:30.578666 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:47:30.578675 | orchestrator | ok: [testbed-manager] 2025-09-04 00:47:30.578684 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:47:30.578694 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:47:30.578703 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:47:30.578712 | orchestrator | 2025-09-04 00:47:30.578722 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-09-04 00:47:30.578731 | orchestrator | Thursday 04 September 2025 00:47:20 +0000 (0:00:02.360) 0:02:24.373 **** 2025-09-04 00:47:30.578741 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:47:30.578750 | orchestrator | changed: [testbed-manager] 2025-09-04 00:47:30.578760 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:47:30.578769 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:47:30.578778 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:47:30.578787 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:47:30.578797 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:47:30.578806 | orchestrator | 2025-09-04 00:47:30.578816 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:47:30.578825 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-04 00:47:30.578835 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-04 00:47:30.578845 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-04 00:47:30.578855 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-04 00:47:30.578864 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-04 00:47:30.578874 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-04 00:47:30.578883 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-04 00:47:30.578893 | orchestrator | 2025-09-04 00:47:30.578902 | orchestrator | 2025-09-04 
00:47:30.578911 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 00:47:30.578921 | orchestrator | Thursday 04 September 2025 00:47:29 +0000 (0:00:09.208) 0:02:33.582 **** 2025-09-04 00:47:30.578931 | orchestrator | =============================================================================== 2025-09-04 00:47:30.578940 | orchestrator | common : Restart fluentd container ------------------------------------- 41.70s 2025-09-04 00:47:30.578950 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 34.57s 2025-09-04 00:47:30.578959 | orchestrator | common : Restart cron container ----------------------------------------- 9.21s 2025-09-04 00:47:30.578968 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 6.70s 2025-09-04 00:47:30.578978 | orchestrator | common : Copying over config.json files for services -------------------- 6.45s 2025-09-04 00:47:30.578987 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 5.42s 2025-09-04 00:47:30.579003 | orchestrator | common : Copying over cron logrotate config file ------------------------ 5.35s 2025-09-04 00:47:30.579013 | orchestrator | common : Ensuring config directories exist ------------------------------ 5.11s 2025-09-04 00:47:30.579022 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 4.41s 2025-09-04 00:47:30.579031 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.69s 2025-09-04 00:47:30.579041 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.32s 2025-09-04 00:47:30.579050 | orchestrator | common : Check common containers ---------------------------------------- 2.97s 2025-09-04 00:47:30.579060 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.78s 2025-09-04 00:47:30.579069 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.77s 2025-09-04 00:47:30.579083 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 2.42s 2025-09-04 00:47:30.579093 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.36s 2025-09-04 00:47:30.579103 | orchestrator | common : Creating log volume -------------------------------------------- 2.08s 2025-09-04 00:47:30.579112 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.77s 2025-09-04 00:47:30.579122 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.51s 2025-09-04 00:47:30.579131 | orchestrator | common : include_tasks -------------------------------------------------- 1.44s 2025-09-04 00:47:30.579141 | orchestrator | 2025-09-04 00:47:30 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:47:30.579150 | orchestrator | 2025-09-04 00:47:30 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:47:33.610878 | orchestrator | 2025-09-04 00:47:33 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:47:33.611244 | orchestrator | 2025-09-04 00:47:33 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:47:33.612203 | orchestrator | 2025-09-04 00:47:33 | INFO  | Task b850e1ab-5594-47f0-92af-ede8e683e24e is in state STARTED 2025-09-04 00:47:33.612973 | orchestrator | 2025-09-04 
00:47:33 | INFO  | Task a4fc8de7-0ec8-4d39-a914-825be3852d68 is in state STARTED 2025-09-04 00:47:33.613842 | orchestrator | 2025-09-04 00:47:33 | INFO  | Task 9d6c82b2-d94a-477e-82e4-76df14ab863e is in state STARTED 2025-09-04 00:47:33.614641 | orchestrator | 2025-09-04 00:47:33 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:47:33.614716 | orchestrator | 2025-09-04 00:47:33 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:47:36.644102 | orchestrator | 2025-09-04 00:47:36 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:47:36.644346 | orchestrator | 2025-09-04 00:47:36 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:47:36.645140 | orchestrator | 2025-09-04 00:47:36 | INFO  | Task b850e1ab-5594-47f0-92af-ede8e683e24e is in state STARTED 2025-09-04 00:47:36.646003 | orchestrator | 2025-09-04 00:47:36 | INFO  | Task a4fc8de7-0ec8-4d39-a914-825be3852d68 is in state STARTED 2025-09-04 00:47:36.646453 | orchestrator | 2025-09-04 00:47:36 | INFO  | Task 9d6c82b2-d94a-477e-82e4-76df14ab863e is in state STARTED 2025-09-04 00:47:36.648704 | orchestrator | 2025-09-04 00:47:36 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:47:36.648727 | orchestrator | 2025-09-04 00:47:36 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:47:39.676957 | orchestrator | 2025-09-04 00:47:39 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:47:39.677186 | orchestrator | 2025-09-04 00:47:39 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:47:39.677924 | orchestrator | 2025-09-04 00:47:39 | INFO  | Task b850e1ab-5594-47f0-92af-ede8e683e24e is in state STARTED 2025-09-04 00:47:39.678569 | orchestrator | 2025-09-04 00:47:39 | INFO  | Task a4fc8de7-0ec8-4d39-a914-825be3852d68 is in state STARTED 2025-09-04 00:47:39.679157 | orchestrator | 2025-09-04 00:47:39 | INFO  | Task 9d6c82b2-d94a-477e-82e4-76df14ab863e is in state STARTED 2025-09-04 00:47:39.679939 | orchestrator | 2025-09-04 00:47:39 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:47:39.679958 | orchestrator | 2025-09-04 00:47:39 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:47:42.710278 | orchestrator | 2025-09-04 00:47:42 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:47:42.710780 | orchestrator | 2025-09-04 00:47:42 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:47:42.712272 | orchestrator | 2025-09-04 00:47:42 | INFO  | Task b850e1ab-5594-47f0-92af-ede8e683e24e is in state STARTED 2025-09-04 00:47:42.712901 | orchestrator | 2025-09-04 00:47:42 | INFO  | Task a4fc8de7-0ec8-4d39-a914-825be3852d68 is in state STARTED 2025-09-04 00:47:42.713806 | orchestrator | 2025-09-04 00:47:42 | INFO  | Task 9d6c82b2-d94a-477e-82e4-76df14ab863e is in state STARTED 2025-09-04 00:47:42.714727 | orchestrator | 2025-09-04 00:47:42 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:47:42.714752 | orchestrator | 2025-09-04 00:47:42 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:47:45.761215 | orchestrator | 2025-09-04 00:47:45 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:47:45.761294 | orchestrator | 2025-09-04 00:47:45 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:47:45.761308 | 
orchestrator | 2025-09-04 00:47:45 | INFO  | Task b850e1ab-5594-47f0-92af-ede8e683e24e is in state STARTED 2025-09-04 00:47:45.761319 | orchestrator | 2025-09-04 00:47:45 | INFO  | Task a4fc8de7-0ec8-4d39-a914-825be3852d68 is in state SUCCESS 2025-09-04 00:47:45.761330 | orchestrator | 2025-09-04 00:47:45 | INFO  | Task 9d6c82b2-d94a-477e-82e4-76df14ab863e is in state STARTED 2025-09-04 00:47:45.761989 | orchestrator | 2025-09-04 00:47:45 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:47:45.762079 | orchestrator | 2025-09-04 00:47:45 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:47:48.806883 | orchestrator | 2025-09-04 00:47:48 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:47:48.806969 | orchestrator | 2025-09-04 00:47:48 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:47:48.807755 | orchestrator | 2025-09-04 00:47:48 | INFO  | Task b850e1ab-5594-47f0-92af-ede8e683e24e is in state STARTED 2025-09-04 00:47:48.809769 | orchestrator | 2025-09-04 00:47:48 | INFO  | Task 9d6c82b2-d94a-477e-82e4-76df14ab863e is in state STARTED 2025-09-04 00:47:48.809862 | orchestrator | 2025-09-04 00:47:48 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:47:48.810646 | orchestrator | 2025-09-04 00:47:48 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:47:48.810715 | orchestrator | 2025-09-04 00:47:48 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:47:51.850210 | orchestrator | 2025-09-04 00:47:51 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:47:51.850805 | orchestrator | 2025-09-04 00:47:51 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:47:51.851665 | orchestrator | 2025-09-04 00:47:51 | INFO  | Task b850e1ab-5594-47f0-92af-ede8e683e24e is in state STARTED 2025-09-04 00:47:51.852542 | orchestrator | 2025-09-04 00:47:51 | INFO  | Task 9d6c82b2-d94a-477e-82e4-76df14ab863e is in state STARTED 2025-09-04 00:47:51.853660 | orchestrator | 2025-09-04 00:47:51 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:47:51.854293 | orchestrator | 2025-09-04 00:47:51 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:47:51.854326 | orchestrator | 2025-09-04 00:47:51 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:47:54.919768 | orchestrator | 2025-09-04 00:47:54 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:47:54.922618 | orchestrator | 2025-09-04 00:47:54 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:47:54.923012 | orchestrator | 2025-09-04 00:47:54 | INFO  | Task b850e1ab-5594-47f0-92af-ede8e683e24e is in state STARTED 2025-09-04 00:47:54.927028 | orchestrator | 2025-09-04 00:47:54 | INFO  | Task 9d6c82b2-d94a-477e-82e4-76df14ab863e is in state STARTED 2025-09-04 00:47:54.929809 | orchestrator | 2025-09-04 00:47:54 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:47:54.932327 | orchestrator | 2025-09-04 00:47:54 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:47:54.933521 | orchestrator | 2025-09-04 00:47:54 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:47:58.106973 | orchestrator | 2025-09-04 00:47:58 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 
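[Editor's note] The repeated "Task <id> is in state STARTED ... Wait 1 second(s) until the next check" records above come from the deploy wrapper polling a set of task IDs once per interval and dropping each one as soon as it reports a terminal state (SUCCESS appears once for a4fc8de7... and that ID then disappears from later passes). The following is only a minimal sketch of that wait-loop pattern, not the actual OSISM client code: the function name get_task_state, the stub state source, and the one-second interval are assumptions made for illustration.

import time
from datetime import datetime

def get_task_state(task_id):
    # Placeholder: a real implementation would query the task result backend.
    # Here every task reports SUCCESS immediately so the sketch terminates.
    return "SUCCESS"

def wait_for_tasks(task_ids, interval=1):
    # Poll all pending tasks, print one status line per task per pass
    # (mirroring the log format above), and drop finished tasks.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            now = datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S")
            print(f"{now} | INFO  | Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)

wait_for_tasks(["e2a0f5bd-c4a3-4221-94bb-c536b75ec790",
                "2455333a-1467-4e15-afeb-18d3544d41aa"])

The per-pass re-check of the whole pending set is why several task IDs are reported together at each timestamp in the log; newly scheduled tasks (such as 4e6c3fa0...) simply join the set on a later pass.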
2025-09-04 00:47:58.107067 | orchestrator | 2025-09-04 00:47:58 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:47:58.107080 | orchestrator | 2025-09-04 00:47:58 | INFO  | Task b850e1ab-5594-47f0-92af-ede8e683e24e is in state STARTED 2025-09-04 00:47:58.107091 | orchestrator | 2025-09-04 00:47:58 | INFO  | Task 9d6c82b2-d94a-477e-82e4-76df14ab863e is in state STARTED 2025-09-04 00:47:58.108609 | orchestrator | 2025-09-04 00:47:58 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:47:58.109800 | orchestrator | 2025-09-04 00:47:58 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:47:58.109822 | orchestrator | 2025-09-04 00:47:58 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:48:01.208938 | orchestrator | 2025-09-04 00:48:01 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:48:01.209037 | orchestrator | 2025-09-04 00:48:01 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:48:01.209052 | orchestrator | 2025-09-04 00:48:01 | INFO  | Task b850e1ab-5594-47f0-92af-ede8e683e24e is in state STARTED 2025-09-04 00:48:01.209064 | orchestrator | 2025-09-04 00:48:01 | INFO  | Task 9d6c82b2-d94a-477e-82e4-76df14ab863e is in state STARTED 2025-09-04 00:48:01.209075 | orchestrator | 2025-09-04 00:48:01 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:48:01.209086 | orchestrator | 2025-09-04 00:48:01 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:48:01.209097 | orchestrator | 2025-09-04 00:48:01 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:48:04.250264 | orchestrator | 2025-09-04 00:48:04 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:48:04.255798 | orchestrator | 2025-09-04 00:48:04 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:48:04.261832 | orchestrator | 2025-09-04 00:48:04 | INFO  | Task b850e1ab-5594-47f0-92af-ede8e683e24e is in state STARTED 2025-09-04 00:48:04.265791 | orchestrator | 2025-09-04 00:48:04 | INFO  | Task 9d6c82b2-d94a-477e-82e4-76df14ab863e is in state STARTED 2025-09-04 00:48:04.270859 | orchestrator | 2025-09-04 00:48:04 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:48:04.273099 | orchestrator | 2025-09-04 00:48:04 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:48:04.274508 | orchestrator | 2025-09-04 00:48:04 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:48:07.309080 | orchestrator | 2025-09-04 00:48:07 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:48:07.311070 | orchestrator | 2025-09-04 00:48:07 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:48:07.313751 | orchestrator | 2025-09-04 00:48:07 | INFO  | Task b850e1ab-5594-47f0-92af-ede8e683e24e is in state STARTED 2025-09-04 00:48:07.313844 | orchestrator | 2025-09-04 00:48:07 | INFO  | Task 9d6c82b2-d94a-477e-82e4-76df14ab863e is in state STARTED 2025-09-04 00:48:07.315287 | orchestrator | 2025-09-04 00:48:07 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:48:07.316325 | orchestrator | 2025-09-04 00:48:07 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:48:07.316413 | orchestrator | 2025-09-04 00:48:07 | INFO  | Wait 1 second(s) 
until the next check 2025-09-04 00:48:10.374344 | orchestrator | 2025-09-04 00:48:10 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:48:10.375275 | orchestrator | 2025-09-04 00:48:10 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:48:10.375905 | orchestrator | 2025-09-04 00:48:10 | INFO  | Task b850e1ab-5594-47f0-92af-ede8e683e24e is in state SUCCESS 2025-09-04 00:48:10.377039 | orchestrator | 2025-09-04 00:48:10.377096 | orchestrator | 2025-09-04 00:48:10.377116 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-04 00:48:10.377135 | orchestrator | 2025-09-04 00:48:10.377153 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-04 00:48:10.377170 | orchestrator | Thursday 04 September 2025 00:47:36 +0000 (0:00:00.310) 0:00:00.310 **** 2025-09-04 00:48:10.377187 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:48:10.377205 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:48:10.377222 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:48:10.377238 | orchestrator | 2025-09-04 00:48:10.377255 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-04 00:48:10.377273 | orchestrator | Thursday 04 September 2025 00:47:36 +0000 (0:00:00.395) 0:00:00.706 **** 2025-09-04 00:48:10.377291 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-09-04 00:48:10.377308 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-09-04 00:48:10.377325 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-09-04 00:48:10.377343 | orchestrator | 2025-09-04 00:48:10.377360 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-09-04 00:48:10.377376 | orchestrator | 2025-09-04 00:48:10.377394 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-09-04 00:48:10.377412 | orchestrator | Thursday 04 September 2025 00:47:37 +0000 (0:00:00.481) 0:00:01.188 **** 2025-09-04 00:48:10.377428 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:48:10.377446 | orchestrator | 2025-09-04 00:48:10.377462 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-09-04 00:48:10.377478 | orchestrator | Thursday 04 September 2025 00:47:37 +0000 (0:00:00.767) 0:00:01.956 **** 2025-09-04 00:48:10.377563 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-09-04 00:48:10.377584 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-09-04 00:48:10.377600 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-09-04 00:48:10.377617 | orchestrator | 2025-09-04 00:48:10.377634 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-09-04 00:48:10.377652 | orchestrator | Thursday 04 September 2025 00:47:38 +0000 (0:00:00.836) 0:00:02.792 **** 2025-09-04 00:48:10.377670 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-09-04 00:48:10.377689 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-09-04 00:48:10.377707 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-09-04 00:48:10.377725 | orchestrator | 2025-09-04 00:48:10.377743 | orchestrator | TASK [memcached : Check memcached container] 
*********************************** 2025-09-04 00:48:10.377761 | orchestrator | Thursday 04 September 2025 00:47:40 +0000 (0:00:01.777) 0:00:04.570 **** 2025-09-04 00:48:10.377778 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:48:10.377796 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:48:10.377815 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:48:10.377832 | orchestrator | 2025-09-04 00:48:10.377851 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-09-04 00:48:10.377869 | orchestrator | Thursday 04 September 2025 00:47:42 +0000 (0:00:01.902) 0:00:06.472 **** 2025-09-04 00:48:10.377888 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:48:10.377906 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:48:10.377924 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:48:10.377940 | orchestrator | 2025-09-04 00:48:10.377959 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:48:10.377978 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:48:10.377999 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:48:10.378207 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:48:10.378235 | orchestrator | 2025-09-04 00:48:10.378252 | orchestrator | 2025-09-04 00:48:10.378270 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 00:48:10.378288 | orchestrator | Thursday 04 September 2025 00:47:45 +0000 (0:00:02.611) 0:00:09.084 **** 2025-09-04 00:48:10.378305 | orchestrator | =============================================================================== 2025-09-04 00:48:10.378322 | orchestrator | memcached : Restart memcached container --------------------------------- 2.61s 2025-09-04 00:48:10.378339 | orchestrator | memcached : Check memcached container ----------------------------------- 1.90s 2025-09-04 00:48:10.378356 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.78s 2025-09-04 00:48:10.378373 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.84s 2025-09-04 00:48:10.378390 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.77s 2025-09-04 00:48:10.378408 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.48s 2025-09-04 00:48:10.378444 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.40s 2025-09-04 00:48:10.378463 | orchestrator | 2025-09-04 00:48:10.378481 | orchestrator | 2025-09-04 00:48:10.378497 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-04 00:48:10.378514 | orchestrator | 2025-09-04 00:48:10.378556 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-04 00:48:10.378573 | orchestrator | Thursday 04 September 2025 00:47:36 +0000 (0:00:00.340) 0:00:00.340 **** 2025-09-04 00:48:10.378590 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:48:10.378606 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:48:10.378639 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:48:10.378657 | orchestrator | 2025-09-04 00:48:10.378675 | orchestrator | TASK [Group hosts based on 
enabled services] *********************************** 2025-09-04 00:48:10.378713 | orchestrator | Thursday 04 September 2025 00:47:37 +0000 (0:00:00.464) 0:00:00.805 **** 2025-09-04 00:48:10.378732 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-09-04 00:48:10.378748 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-09-04 00:48:10.378766 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-09-04 00:48:10.378783 | orchestrator | 2025-09-04 00:48:10.378800 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-09-04 00:48:10.378817 | orchestrator | 2025-09-04 00:48:10.378835 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-09-04 00:48:10.378853 | orchestrator | Thursday 04 September 2025 00:47:37 +0000 (0:00:00.518) 0:00:01.323 **** 2025-09-04 00:48:10.378867 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:48:10.378882 | orchestrator | 2025-09-04 00:48:10.378897 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-09-04 00:48:10.378911 | orchestrator | Thursday 04 September 2025 00:47:38 +0000 (0:00:00.589) 0:00:01.913 **** 2025-09-04 00:48:10.378927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-04 00:48:10.378948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-04 00:48:10.378974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-04 00:48:10.378990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-04 00:48:10.379007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-04 00:48:10.379050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-04 00:48:10.379065 | orchestrator | 2025-09-04 00:48:10.379078 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-09-04 00:48:10.379093 | orchestrator | Thursday 04 September 2025 00:47:39 +0000 (0:00:01.219) 0:00:03.133 **** 2025-09-04 00:48:10.379107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-04 00:48:10.379121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-04 00:48:10.379135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-04 00:48:10.379155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-04 00:48:10.379169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-04 00:48:10.379200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-04 00:48:10.379215 | orchestrator | 2025-09-04 00:48:10.379228 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-09-04 00:48:10.379241 | orchestrator | Thursday 04 September 2025 00:47:42 +0000 (0:00:03.449) 0:00:06.583 **** 2025-09-04 00:48:10.379254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-04 00:48:10.379268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-04 00:48:10.379282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-04 00:48:10.379305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-04 00:48:10.379320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-04 00:48:10.379343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-04 00:48:10.379357 | orchestrator | 2025-09-04 00:48:10.379377 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-09-04 00:48:10.379393 | orchestrator | Thursday 04 September 2025 00:47:46 +0000 (0:00:03.133) 0:00:09.716 **** 2025-09-04 00:48:10.379407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-04 00:48:10.379421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-04 00:48:10.379436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-04 00:48:10.379456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-04 00:48:10.379471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-04 00:48:10.379493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-04 00:48:10.379502 | orchestrator | 2025-09-04 00:48:10.379510 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-04 00:48:10.379518 | orchestrator | Thursday 04 September 2025 00:47:48 +0000 (0:00:02.090) 0:00:11.808 **** 2025-09-04 00:48:10.379543 | orchestrator | 2025-09-04 00:48:10.379551 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-04 00:48:10.379564 | orchestrator | Thursday 04 September 2025 00:47:48 +0000 (0:00:00.276) 0:00:12.085 **** 2025-09-04 00:48:10.379572 | orchestrator | 2025-09-04 00:48:10.379580 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-04 00:48:10.379588 | orchestrator | Thursday 04 September 2025 00:47:48 +0000 (0:00:00.187) 0:00:12.273 **** 2025-09-04 00:48:10.379595 | orchestrator | 2025-09-04 00:48:10.379603 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-09-04 00:48:10.379611 | orchestrator | Thursday 04 September 2025 00:47:49 +0000 (0:00:00.412) 0:00:12.686 **** 2025-09-04 00:48:10.379626 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:48:10.379640 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:48:10.379655 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:48:10.379669 | orchestrator | 2025-09-04 00:48:10.379683 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-09-04 00:48:10.379697 | orchestrator | Thursday 04 September 2025 00:47:58 +0000 (0:00:09.558) 0:00:22.244 **** 2025-09-04 00:48:10.379712 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:48:10.379726 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:48:10.379740 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:48:10.379755 | orchestrator | 2025-09-04 00:48:10.379769 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:48:10.379783 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:48:10.379801 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:48:10.379815 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:48:10.379831 | orchestrator | 2025-09-04 00:48:10.379846 | orchestrator | 2025-09-04 00:48:10.379859 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 00:48:10.379872 | orchestrator | Thursday 04 September 2025 00:48:08 +0000 (0:00:10.282) 0:00:32.527 **** 2025-09-04 00:48:10.379887 | orchestrator | =============================================================================== 2025-09-04 00:48:10.379903 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.28s 2025-09-04 00:48:10.379919 | orchestrator | redis : Restart redis container ----------------------------------------- 9.56s 2025-09-04 00:48:10.379933 | orchestrator | redis : Copying over default config.json files -------------------------- 3.45s 2025-09-04 00:48:10.379946 | orchestrator | redis : Copying over redis config files 
--------------------------------- 3.13s 2025-09-04 00:48:10.379959 | orchestrator | redis : Check redis containers ------------------------------------------ 2.09s 2025-09-04 00:48:10.379972 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.22s 2025-09-04 00:48:10.379985 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.88s 2025-09-04 00:48:10.379998 | orchestrator | redis : include_tasks --------------------------------------------------- 0.59s 2025-09-04 00:48:10.380011 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.52s 2025-09-04 00:48:10.380033 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.46s 2025-09-04 00:48:10.380047 | orchestrator | 2025-09-04 00:48:10 | INFO  | Task 9d6c82b2-d94a-477e-82e4-76df14ab863e is in state STARTED 2025-09-04 00:48:10.380060 | orchestrator | 2025-09-04 00:48:10 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:48:10.380243 | orchestrator | 2025-09-04 00:48:10 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:48:10.380260 | orchestrator | 2025-09-04 00:48:10 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:48:13.417375 | orchestrator | 2025-09-04 00:48:13 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:48:13.417474 | orchestrator | 2025-09-04 00:48:13 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:48:13.417498 | orchestrator | 2025-09-04 00:48:13 | INFO  | Task 9d6c82b2-d94a-477e-82e4-76df14ab863e is in state STARTED 2025-09-04 00:48:13.418132 | orchestrator | 2025-09-04 00:48:13 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:48:13.418665 | orchestrator | 2025-09-04 00:48:13 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:48:13.418687 | orchestrator | 2025-09-04 00:48:13 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:48:16.509387 | orchestrator | 2025-09-04 00:48:16 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:48:16.509481 | orchestrator | 2025-09-04 00:48:16 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:48:16.512612 | orchestrator | 2025-09-04 00:48:16 | INFO  | Task 9d6c82b2-d94a-477e-82e4-76df14ab863e is in state STARTED 2025-09-04 00:48:16.513389 | orchestrator | 2025-09-04 00:48:16 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:48:16.515204 | orchestrator | 2025-09-04 00:48:16 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:48:16.515233 | orchestrator | 2025-09-04 00:48:16 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:48:19.613915 | orchestrator | 2025-09-04 00:48:19 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:48:19.614005 | orchestrator | 2025-09-04 00:48:19 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:48:19.614070 | orchestrator | 2025-09-04 00:48:19 | INFO  | Task 9d6c82b2-d94a-477e-82e4-76df14ab863e is in state STARTED 2025-09-04 00:48:19.614083 | orchestrator | 2025-09-04 00:48:19 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:48:19.614094 | orchestrator | 2025-09-04 00:48:19 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in 
state STARTED 2025-09-04 00:48:19.614134 | orchestrator | 2025-09-04 00:48:19 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:48:22.592061 | orchestrator | 2025-09-04 00:48:22 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:48:22.598151 | orchestrator | 2025-09-04 00:48:22 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:48:22.603691 | orchestrator | 2025-09-04 00:48:22 | INFO  | Task 9d6c82b2-d94a-477e-82e4-76df14ab863e is in state STARTED 2025-09-04 00:48:22.604451 | orchestrator | 2025-09-04 00:48:22 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:48:22.607952 | orchestrator | 2025-09-04 00:48:22 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:48:22.608048 | orchestrator | 2025-09-04 00:48:22 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:48:25.651684 | orchestrator | 2025-09-04 00:48:25 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:48:25.653646 | orchestrator | 2025-09-04 00:48:25 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:48:25.654232 | orchestrator | 2025-09-04 00:48:25 | INFO  | Task 9d6c82b2-d94a-477e-82e4-76df14ab863e is in state STARTED 2025-09-04 00:48:25.655816 | orchestrator | 2025-09-04 00:48:25 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:48:25.656435 | orchestrator | 2025-09-04 00:48:25 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:48:25.657399 | orchestrator | 2025-09-04 00:48:25 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:48:28.697548 | orchestrator | 2025-09-04 00:48:28 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:48:28.697870 | orchestrator | 2025-09-04 00:48:28 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:48:28.698826 | orchestrator | 2025-09-04 00:48:28 | INFO  | Task 9d6c82b2-d94a-477e-82e4-76df14ab863e is in state STARTED 2025-09-04 00:48:28.699604 | orchestrator | 2025-09-04 00:48:28 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:48:28.700623 | orchestrator | 2025-09-04 00:48:28 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:48:28.700647 | orchestrator | 2025-09-04 00:48:28 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:48:31.741585 | orchestrator | 2025-09-04 00:48:31 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:48:31.743187 | orchestrator | 2025-09-04 00:48:31 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:48:31.744367 | orchestrator | 2025-09-04 00:48:31 | INFO  | Task 9d6c82b2-d94a-477e-82e4-76df14ab863e is in state STARTED 2025-09-04 00:48:31.746170 | orchestrator | 2025-09-04 00:48:31 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:48:31.747332 | orchestrator | 2025-09-04 00:48:31 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:48:31.747360 | orchestrator | 2025-09-04 00:48:31 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:48:34.788637 | orchestrator | 2025-09-04 00:48:34 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:48:34.788716 | orchestrator | 2025-09-04 00:48:34 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in 
state STARTED 2025-09-04 00:48:34.789019 | orchestrator | 2025-09-04 00:48:34 | INFO  | Task 9d6c82b2-d94a-477e-82e4-76df14ab863e is in state STARTED 2025-09-04 00:48:34.789947 | orchestrator | 2025-09-04 00:48:34 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:48:34.790596 | orchestrator | 2025-09-04 00:48:34 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:48:34.790853 | orchestrator | 2025-09-04 00:48:34 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:48:37.831377 | orchestrator | 2025-09-04 00:48:37 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:48:37.831912 | orchestrator | 2025-09-04 00:48:37 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:48:37.832891 | orchestrator | 2025-09-04 00:48:37 | INFO  | Task 9d6c82b2-d94a-477e-82e4-76df14ab863e is in state STARTED 2025-09-04 00:48:37.833765 | orchestrator | 2025-09-04 00:48:37 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:48:37.836056 | orchestrator | 2025-09-04 00:48:37 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:48:37.836092 | orchestrator | 2025-09-04 00:48:37 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:48:41.219878 | orchestrator | 2025-09-04 00:48:41 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:48:41.219974 | orchestrator | 2025-09-04 00:48:41 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:48:41.221211 | orchestrator | 2025-09-04 00:48:41 | INFO  | Task 9d6c82b2-d94a-477e-82e4-76df14ab863e is in state STARTED 2025-09-04 00:48:41.221693 | orchestrator | 2025-09-04 00:48:41 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:48:41.222575 | orchestrator | 2025-09-04 00:48:41 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:48:41.222600 | orchestrator | 2025-09-04 00:48:41 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:48:44.328337 | orchestrator | 2025-09-04 00:48:44 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:48:44.328417 | orchestrator | 2025-09-04 00:48:44 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:48:44.328427 | orchestrator | 2025-09-04 00:48:44 | INFO  | Task 9d6c82b2-d94a-477e-82e4-76df14ab863e is in state STARTED 2025-09-04 00:48:44.328435 | orchestrator | 2025-09-04 00:48:44 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:48:44.328443 | orchestrator | 2025-09-04 00:48:44 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:48:44.328450 | orchestrator | 2025-09-04 00:48:44 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:48:47.344449 | orchestrator | 2025-09-04 00:48:47 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:48:47.345266 | orchestrator | 2025-09-04 00:48:47 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:48:47.347405 | orchestrator | 2025-09-04 00:48:47 | INFO  | Task 9d6c82b2-d94a-477e-82e4-76df14ab863e is in state SUCCESS 2025-09-04 00:48:47.348449 | orchestrator | 2025-09-04 00:48:47.348506 | orchestrator | 2025-09-04 00:48:47.348519 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-04 
00:48:47.348531 | orchestrator | 2025-09-04 00:48:47.348542 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-04 00:48:47.348553 | orchestrator | Thursday 04 September 2025 00:47:36 +0000 (0:00:00.262) 0:00:00.262 **** 2025-09-04 00:48:47.348564 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:48:47.348577 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:48:47.348588 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:48:47.348598 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:48:47.348634 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:48:47.348645 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:48:47.348656 | orchestrator | 2025-09-04 00:48:47.348667 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-04 00:48:47.348677 | orchestrator | Thursday 04 September 2025 00:47:37 +0000 (0:00:00.899) 0:00:01.161 **** 2025-09-04 00:48:47.348688 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-04 00:48:47.348699 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-04 00:48:47.348709 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-04 00:48:47.348720 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-04 00:48:47.348731 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-04 00:48:47.348742 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-04 00:48:47.348752 | orchestrator | 2025-09-04 00:48:47.348763 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-09-04 00:48:47.348774 | orchestrator | 2025-09-04 00:48:47.348784 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-09-04 00:48:47.348795 | orchestrator | Thursday 04 September 2025 00:47:37 +0000 (0:00:00.867) 0:00:02.029 **** 2025-09-04 00:48:47.348807 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:48:47.348819 | orchestrator | 2025-09-04 00:48:47.348829 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-04 00:48:47.348840 | orchestrator | Thursday 04 September 2025 00:47:39 +0000 (0:00:01.632) 0:00:03.662 **** 2025-09-04 00:48:47.348851 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-09-04 00:48:47.348862 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-09-04 00:48:47.348873 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-09-04 00:48:47.348884 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-09-04 00:48:47.348894 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-09-04 00:48:47.348905 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-09-04 00:48:47.348916 | orchestrator | 2025-09-04 00:48:47.348926 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-04 00:48:47.348937 | orchestrator | Thursday 04 September 2025 00:47:41 +0000 (0:00:01.628) 0:00:05.291 **** 2025-09-04 00:48:47.348948 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 
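
The module-load tasks above amount to inserting the openvswitch kernel module now and persisting it via modules-load.d so it comes back after a reboot. A minimal sketch of the equivalent host-side steps, assuming root privileges; the module name comes from the log, while the persistence file name and the helper itself are illustrative and not the role's actual implementation:

```python
#!/usr/bin/env python3
"""Illustrative sketch of "Load modules" / "Persist modules via modules-load.d"
for the openvswitch module (assumes root; not the role's real code)."""
import subprocess
from pathlib import Path

MODULE = "openvswitch"
# Assumed filename; the role may use a different name under /etc/modules-load.d/.
PERSIST_FILE = Path(f"/etc/modules-load.d/{MODULE}.conf")

def load_and_persist(module: str = MODULE) -> None:
    # "Load modules": insert the kernel module immediately.
    subprocess.run(["modprobe", module], check=True)
    # "Persist modules via modules-load.d": systemd-modules-load reads this
    # directory at boot and reloads the listed modules.
    PERSIST_FILE.write_text(module + "\n")

if __name__ == "__main__":
    load_and_persist()
```

Run by hand, this should leave openvswitch visible in lsmod and reloaded on the next boot, which is what the openvswitch container tasks that follow rely on.
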
2025-09-04 00:48:47.348959 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-09-04 00:48:47.348969 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-09-04 00:48:47.348980 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-09-04 00:48:47.348990 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-09-04 00:48:47.349001 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-09-04 00:48:47.349012 | orchestrator | 2025-09-04 00:48:47.349022 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-04 00:48:47.349033 | orchestrator | Thursday 04 September 2025 00:47:43 +0000 (0:00:01.892) 0:00:07.183 **** 2025-09-04 00:48:47.349044 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-09-04 00:48:47.349055 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:47.349068 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-09-04 00:48:47.349081 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:48:47.349093 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-09-04 00:48:47.349106 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:48:47.349119 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-09-04 00:48:47.349131 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:48:47.349155 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-09-04 00:48:47.349167 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:48:47.349179 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-09-04 00:48:47.349192 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:48:47.349205 | orchestrator | 2025-09-04 00:48:47.349218 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-09-04 00:48:47.349230 | orchestrator | Thursday 04 September 2025 00:47:44 +0000 (0:00:01.627) 0:00:08.811 **** 2025-09-04 00:48:47.349243 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:47.349255 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:48:47.349267 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:48:47.349279 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:48:47.349291 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:48:47.349304 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:48:47.349317 | orchestrator | 2025-09-04 00:48:47.349338 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-09-04 00:48:47.349363 | orchestrator | Thursday 04 September 2025 00:47:45 +0000 (0:00:01.109) 0:00:09.921 **** 2025-09-04 00:48:47.349394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-04 00:48:47.349412 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-04 00:48:47.349425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-04 00:48:47.349437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-04 00:48:47.349456 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-04 00:48:47.349509 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-04 00:48:47.349530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-04 00:48:47.349542 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-04 00:48:47.349553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-04 00:48:47.349565 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-04 00:48:47.349733 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-04 00:48:47.349761 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-04 00:48:47.349774 | orchestrator | 2025-09-04 00:48:47.349786 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-09-04 00:48:47.349797 | orchestrator | Thursday 04 September 2025 00:47:48 +0000 (0:00:02.350) 0:00:12.271 **** 2025-09-04 00:48:47.349810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-04 00:48:47.349830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-04 00:48:47.349843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-04 00:48:47.349862 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': 
{'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-04 00:48:47.349874 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-04 00:48:47.349899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-04 00:48:47.349912 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-04 00:48:47.349924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': 
'30'}}}) 2025-09-04 00:48:47.349935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-04 00:48:47.349954 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-04 00:48:47.349971 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-04 00:48:47.349990 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-04 00:48:47.350002 | orchestrator | 2025-09-04 00:48:47.350014 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-09-04 00:48:47.350078 | orchestrator | Thursday 04 September 2025 00:47:52 +0000 (0:00:03.972) 0:00:16.244 **** 2025-09-04 00:48:47.350090 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:48:47.350101 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:47.350112 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:48:47.350122 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:48:47.350133 | orchestrator | skipping: [testbed-node-4] 
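
The openvswitch container definitions above embed Docker-style healthchecks (interval 30, retries 3, timeout 30; `ovsdb-client list-dbs` for openvswitch_db and `ovs-appctl version` for openvswitch_vswitchd). A minimal Python sketch of that retry behaviour, assuming the checks are driven from the host via `docker exec`; the images themselves run the tests as CMD-SHELL healthchecks, so this wrapper is purely illustrative:

```python
import subprocess
import time

def container_healthcheck(container: str, test_cmd: list[str],
                          interval: int = 30, retries: int = 3,
                          timeout: int = 30) -> bool:
    """Mirror the healthcheck fields shown in the log: run the test command
    inside the container, allowing `retries` attempts `interval` seconds
    apart, each limited to `timeout` seconds."""
    for attempt in range(1, retries + 1):
        try:
            result = subprocess.run(
                ["docker", "exec", container, *test_cmd],
                capture_output=True, timeout=timeout,
            )
            if result.returncode == 0:
                return True
        except subprocess.TimeoutExpired:
            pass  # treat a hung check the same as a failed one
        if attempt < retries:
            time.sleep(interval)
    return False

# Container names and test commands as they appear in the log above:
# container_healthcheck("openvswitch_db", ["ovsdb-client", "list-dbs"])
# container_healthcheck("openvswitch_vswitchd", ["ovs-appctl", "version"])
```
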
2025-09-04 00:48:47.350143 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:48:47.350154 | orchestrator | 2025-09-04 00:48:47.350165 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-09-04 00:48:47.350175 | orchestrator | Thursday 04 September 2025 00:47:53 +0000 (0:00:01.485) 0:00:17.729 **** 2025-09-04 00:48:47.350187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-04 00:48:47.350206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-04 00:48:47.350218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-04 00:48:47.350241 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-04 00:48:47.350261 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-04 00:48:47.350272 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-04 00:48:47.350283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-04 00:48:47.350303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-04 00:48:47.350315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-04 00:48:47.350330 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-04 00:48:47.350348 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-04 00:48:47.350360 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-04 00:48:47.350380 | orchestrator | 2025-09-04 00:48:47.350392 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-04 00:48:47.350405 | orchestrator | Thursday 04 September 2025 00:47:55 +0000 (0:00:02.098) 0:00:19.828 **** 2025-09-04 00:48:47.350418 | orchestrator | 2025-09-04 00:48:47.350430 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-04 00:48:47.350442 | orchestrator | Thursday 04 September 2025 00:47:55 +0000 (0:00:00.251) 0:00:20.079 **** 2025-09-04 00:48:47.350455 | orchestrator | 2025-09-04 00:48:47.350466 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-04 00:48:47.350506 | orchestrator | Thursday 04 September 2025 00:47:56 +0000 (0:00:00.134) 0:00:20.214 **** 2025-09-04 00:48:47.350520 | orchestrator | 2025-09-04 00:48:47.350532 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-04 00:48:47.350545 | orchestrator | Thursday 04 September 2025 00:47:56 +0000 (0:00:00.123) 0:00:20.338 **** 2025-09-04 00:48:47.350557 | orchestrator | 2025-09-04 00:48:47.350570 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-04 00:48:47.350582 | orchestrator | Thursday 04 September 2025 00:47:56 +0000 (0:00:00.227) 0:00:20.566 **** 
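[Editor's note] For orientation, the openvswitch_vswitchd definition above (privileged, shared /run/openvswitch, kolla config_files mount) translates roughly into the following container start. This is a hedged approximation only; kolla-ansible's kolla_docker module actually creates the container, and its restart policy, environment (e.g. the kolla config strategy) and dimensions are omitted here:

  docker run -d --name openvswitch_vswitchd --privileged \
    -v /etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro \
    -v /etc/localtime:/etc/localtime:ro \
    -v /etc/timezone:/etc/timezone:ro \
    -v /lib/modules:/lib/modules:ro \
    -v /run/openvswitch:/run/openvswitch:shared \
    -v kolla_logs:/var/log/kolla/ \
    registry.osism.tech/kolla/openvswitch-vswitchd:2024.2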
2025-09-04 00:48:47.350594 | orchestrator | 2025-09-04 00:48:47.350606 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-04 00:48:47.350619 | orchestrator | Thursday 04 September 2025 00:47:56 +0000 (0:00:00.175) 0:00:20.741 **** 2025-09-04 00:48:47.350631 | orchestrator | 2025-09-04 00:48:47.350644 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-09-04 00:48:47.350656 | orchestrator | Thursday 04 September 2025 00:47:56 +0000 (0:00:00.205) 0:00:20.947 **** 2025-09-04 00:48:47.350668 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:48:47.350681 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:48:47.350694 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:48:47.350705 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:48:47.350716 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:48:47.350727 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:48:47.350737 | orchestrator | 2025-09-04 00:48:47.350748 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-09-04 00:48:47.350759 | orchestrator | Thursday 04 September 2025 00:48:08 +0000 (0:00:12.092) 0:00:33.039 **** 2025-09-04 00:48:47.350770 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:48:47.350781 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:48:47.350791 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:48:47.350802 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:48:47.350813 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:48:47.350823 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:48:47.350834 | orchestrator | 2025-09-04 00:48:47.350845 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-09-04 00:48:47.350855 | orchestrator | Thursday 04 September 2025 00:48:11 +0000 (0:00:02.271) 0:00:35.311 **** 2025-09-04 00:48:47.350866 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:48:47.350877 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:48:47.350887 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:48:47.350898 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:48:47.350909 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:48:47.350919 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:48:47.350930 | orchestrator | 2025-09-04 00:48:47.350941 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-09-04 00:48:47.350952 | orchestrator | Thursday 04 September 2025 00:48:21 +0000 (0:00:10.671) 0:00:45.983 **** 2025-09-04 00:48:47.350963 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-09-04 00:48:47.350974 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-09-04 00:48:47.350989 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-09-04 00:48:47.351007 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-09-04 00:48:47.351018 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-09-04 00:48:47.351035 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 
'testbed-node-5'}) 2025-09-04 00:48:47.351046 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-09-04 00:48:47.351057 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-09-04 00:48:47.351067 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-09-04 00:48:47.351078 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-09-04 00:48:47.351089 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-09-04 00:48:47.351099 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-09-04 00:48:47.351110 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-04 00:48:47.351120 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-04 00:48:47.351131 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-04 00:48:47.351141 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-04 00:48:47.351152 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-04 00:48:47.351162 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-04 00:48:47.351173 | orchestrator | 2025-09-04 00:48:47.351184 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-09-04 00:48:47.351194 | orchestrator | Thursday 04 September 2025 00:48:29 +0000 (0:00:07.692) 0:00:53.676 **** 2025-09-04 00:48:47.351205 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-09-04 00:48:47.351216 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:48:47.351226 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-09-04 00:48:47.351237 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:48:47.351247 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-09-04 00:48:47.351258 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:48:47.351269 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-09-04 00:48:47.351279 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-09-04 00:48:47.351290 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-09-04 00:48:47.351300 | orchestrator | 2025-09-04 00:48:47.351311 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-09-04 00:48:47.351322 | orchestrator | Thursday 04 September 2025 00:48:32 +0000 (0:00:02.978) 0:00:56.654 **** 2025-09-04 00:48:47.351333 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-09-04 00:48:47.351343 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:48:47.351354 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-09-04 00:48:47.351365 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:48:47.351376 | orchestrator | 
skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-09-04 00:48:47.351386 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:48:47.351397 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-09-04 00:48:47.351408 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-09-04 00:48:47.351425 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-09-04 00:48:47.351436 | orchestrator | 2025-09-04 00:48:47.351447 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-09-04 00:48:47.351457 | orchestrator | Thursday 04 September 2025 00:48:35 +0000 (0:00:03.287) 0:00:59.941 **** 2025-09-04 00:48:47.351468 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:48:47.351529 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:48:47.351540 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:48:47.351551 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:48:47.351562 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:48:47.351572 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:48:47.351583 | orchestrator | 2025-09-04 00:48:47.351593 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:48:47.351604 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-04 00:48:47.351615 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-04 00:48:47.351626 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-04 00:48:47.351642 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-04 00:48:47.351653 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-04 00:48:47.351671 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-04 00:48:47.351682 | orchestrator | 2025-09-04 00:48:47.351692 | orchestrator | 2025-09-04 00:48:47.351703 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 00:48:47.351714 | orchestrator | Thursday 04 September 2025 00:48:46 +0000 (0:00:10.356) 0:01:10.298 **** 2025-09-04 00:48:47.351724 | orchestrator | =============================================================================== 2025-09-04 00:48:47.351735 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 21.03s 2025-09-04 00:48:47.351746 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 12.09s 2025-09-04 00:48:47.351756 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.69s 2025-09-04 00:48:47.351767 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.97s 2025-09-04 00:48:47.351777 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.29s 2025-09-04 00:48:47.351788 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.98s 2025-09-04 00:48:47.351798 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.35s 2025-09-04 00:48:47.351809 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.27s 
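[Editor's note] The "Set system-id, hostname and hw-offload" task and the br-ex/vxlan0 bridge and port setup recorded above boil down to the ovs-vsctl operations sketched below (node-0 shown; the log records the same external_ids per node, and 'state: absent' for hw-offload maps to a removal). The exact module invocation is not visible in this log; run the commands wherever ovs-vsctl can reach the OVSDB socket, e.g. inside the openvswitch_vswitchd container:

  ovs-vsctl set Open_vSwitch . external_ids:system-id=testbed-node-0
  ovs-vsctl set Open_vSwitch . external_ids:hostname=testbed-node-0
  ovs-vsctl remove Open_vSwitch . other_config hw-offload
  # bridge/port layout created only on the network nodes (testbed-node-0..2; 3..5 are skipped)
  ovs-vsctl --may-exist add-br br-ex
  ovs-vsctl --may-exist add-port br-ex vxlan0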
2025-09-04 00:48:47.351819 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.10s 2025-09-04 00:48:47.351829 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.89s 2025-09-04 00:48:47.351840 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.63s 2025-09-04 00:48:47.351851 | orchestrator | module-load : Load modules ---------------------------------------------- 1.63s 2025-09-04 00:48:47.351861 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.63s 2025-09-04 00:48:47.351872 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.49s 2025-09-04 00:48:47.351882 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.12s 2025-09-04 00:48:47.351900 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.11s 2025-09-04 00:48:47.351910 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.90s 2025-09-04 00:48:47.351921 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.87s 2025-09-04 00:48:47.351932 | orchestrator | 2025-09-04 00:48:47 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:48:47.351943 | orchestrator | 2025-09-04 00:48:47 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:48:47.351954 | orchestrator | 2025-09-04 00:48:47 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:48:50.382971 | orchestrator | 2025-09-04 00:48:50 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state STARTED 2025-09-04 00:48:50.383583 | orchestrator | 2025-09-04 00:48:50 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:48:50.384637 | orchestrator | 2025-09-04 00:48:50 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:48:50.386104 | orchestrator | 2025-09-04 00:48:50 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:48:50.387207 | orchestrator | 2025-09-04 00:48:50 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:48:50.387429 | orchestrator | 2025-09-04 00:48:50 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:48:53.427970 | orchestrator | 2025-09-04 00:48:53 | INFO  | Task ebb700e7-dd73-4339-a8f4-fce85957a545 is in state STARTED 2025-09-04 00:48:53.430435 | orchestrator | 2025-09-04 00:48:53 | INFO  | Task e2a0f5bd-c4a3-4221-94bb-c536b75ec790 is in state SUCCESS 2025-09-04 00:48:53.431632 | orchestrator | 2025-09-04 00:48:53.431678 | orchestrator | 2025-09-04 00:48:53.431691 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-09-04 00:48:53.431703 | orchestrator | 2025-09-04 00:48:53.431713 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-09-04 00:48:53.431725 | orchestrator | Thursday 04 September 2025 00:44:56 +0000 (0:00:00.230) 0:00:00.230 **** 2025-09-04 00:48:53.431736 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:48:53.431748 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:48:53.431759 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:48:53.431769 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:48:53.431780 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:48:53.431790 | 
orchestrator | ok: [testbed-node-2] 2025-09-04 00:48:53.431801 | orchestrator | 2025-09-04 00:48:53.431812 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-09-04 00:48:53.431823 | orchestrator | Thursday 04 September 2025 00:44:57 +0000 (0:00:00.952) 0:00:01.182 **** 2025-09-04 00:48:53.431833 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:48:53.431863 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:48:53.431874 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:48:53.431885 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.431895 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:48:53.431906 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:48:53.431917 | orchestrator | 2025-09-04 00:48:53.431927 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-09-04 00:48:53.431938 | orchestrator | Thursday 04 September 2025 00:44:58 +0000 (0:00:00.804) 0:00:01.987 **** 2025-09-04 00:48:53.431949 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:48:53.431960 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:48:53.431970 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:48:53.431981 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.431991 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:48:53.432002 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:48:53.432013 | orchestrator | 2025-09-04 00:48:53.432023 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-09-04 00:48:53.432063 | orchestrator | Thursday 04 September 2025 00:44:59 +0000 (0:00:00.886) 0:00:02.873 **** 2025-09-04 00:48:53.432074 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:48:53.432084 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:48:53.432095 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:48:53.432106 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:48:53.432116 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:48:53.432127 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:48:53.432138 | orchestrator | 2025-09-04 00:48:53.432149 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-09-04 00:48:53.432160 | orchestrator | Thursday 04 September 2025 00:45:02 +0000 (0:00:03.222) 0:00:06.095 **** 2025-09-04 00:48:53.432170 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:48:53.432180 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:48:53.432191 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:48:53.432202 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:48:53.432214 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:48:53.432228 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:48:53.432240 | orchestrator | 2025-09-04 00:48:53.432253 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-09-04 00:48:53.432265 | orchestrator | Thursday 04 September 2025 00:45:04 +0000 (0:00:01.284) 0:00:07.380 **** 2025-09-04 00:48:53.432278 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:48:53.432291 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:48:53.432304 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:48:53.432316 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:48:53.432329 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:48:53.432341 | orchestrator | 
changed: [testbed-node-1] 2025-09-04 00:48:53.432354 | orchestrator | 2025-09-04 00:48:53.432366 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-09-04 00:48:53.432379 | orchestrator | Thursday 04 September 2025 00:45:05 +0000 (0:00:01.153) 0:00:08.534 **** 2025-09-04 00:48:53.432392 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:48:53.432404 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:48:53.432416 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:48:53.432429 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.432441 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:48:53.432454 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:48:53.432529 | orchestrator | 2025-09-04 00:48:53.432544 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-09-04 00:48:53.432557 | orchestrator | Thursday 04 September 2025 00:45:06 +0000 (0:00:01.187) 0:00:09.722 **** 2025-09-04 00:48:53.432569 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:48:53.432579 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:48:53.432590 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:48:53.432601 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.432611 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:48:53.432622 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:48:53.432632 | orchestrator | 2025-09-04 00:48:53.432643 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-09-04 00:48:53.432654 | orchestrator | Thursday 04 September 2025 00:45:07 +0000 (0:00:00.948) 0:00:10.671 **** 2025-09-04 00:48:53.432665 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-04 00:48:53.432675 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-04 00:48:53.432686 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:48:53.432697 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-04 00:48:53.432707 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-04 00:48:53.432718 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:48:53.432729 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-04 00:48:53.432748 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-04 00:48:53.432759 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:48:53.432770 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-04 00:48:53.432796 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-04 00:48:53.432807 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.432818 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-04 00:48:53.432829 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-04 00:48:53.432839 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:48:53.432849 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-04 00:48:53.432859 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-04 00:48:53.432868 | orchestrator | skipping: 
[testbed-node-2] 2025-09-04 00:48:53.432878 | orchestrator | 2025-09-04 00:48:53.432887 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-09-04 00:48:53.432897 | orchestrator | Thursday 04 September 2025 00:45:08 +0000 (0:00:01.198) 0:00:11.870 **** 2025-09-04 00:48:53.432912 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:48:53.432922 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:48:53.432931 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:48:53.432940 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.432950 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:48:53.432959 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:48:53.432968 | orchestrator | 2025-09-04 00:48:53.432978 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-09-04 00:48:53.432989 | orchestrator | Thursday 04 September 2025 00:45:09 +0000 (0:00:01.240) 0:00:13.110 **** 2025-09-04 00:48:53.432998 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:48:53.433008 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:48:53.433017 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:48:53.433027 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:48:53.433036 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:48:53.433045 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:48:53.433055 | orchestrator | 2025-09-04 00:48:53.433064 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-09-04 00:48:53.433074 | orchestrator | Thursday 04 September 2025 00:45:11 +0000 (0:00:01.272) 0:00:14.383 **** 2025-09-04 00:48:53.433083 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:48:53.433092 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:48:53.433102 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:48:53.433111 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:48:53.433120 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:48:53.433130 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:48:53.433139 | orchestrator | 2025-09-04 00:48:53.433149 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-09-04 00:48:53.433158 | orchestrator | Thursday 04 September 2025 00:45:17 +0000 (0:00:06.085) 0:00:20.469 **** 2025-09-04 00:48:53.433167 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:48:53.433177 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:48:53.433186 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:48:53.433196 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.433205 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:48:53.433214 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:48:53.433224 | orchestrator | 2025-09-04 00:48:53.433233 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-09-04 00:48:53.433243 | orchestrator | Thursday 04 September 2025 00:45:19 +0000 (0:00:01.769) 0:00:22.238 **** 2025-09-04 00:48:53.433252 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:48:53.433262 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:48:53.433271 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:48:53.433286 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.433296 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:48:53.433305 | orchestrator | skipping: 
[testbed-node-2] 2025-09-04 00:48:53.433314 | orchestrator | 2025-09-04 00:48:53.433324 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-09-04 00:48:53.433335 | orchestrator | Thursday 04 September 2025 00:45:22 +0000 (0:00:03.099) 0:00:25.338 **** 2025-09-04 00:48:53.433345 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:48:53.433354 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:48:53.433364 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:48:53.433373 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:48:53.433382 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:48:53.433391 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:48:53.433401 | orchestrator | 2025-09-04 00:48:53.433410 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-09-04 00:48:53.433420 | orchestrator | Thursday 04 September 2025 00:45:23 +0000 (0:00:01.641) 0:00:26.979 **** 2025-09-04 00:48:53.433429 | orchestrator | changed: [testbed-node-3] => (item=rancher) 2025-09-04 00:48:53.433439 | orchestrator | changed: [testbed-node-4] => (item=rancher) 2025-09-04 00:48:53.433449 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s) 2025-09-04 00:48:53.433458 | orchestrator | changed: [testbed-node-4] => (item=rancher/k3s) 2025-09-04 00:48:53.433483 | orchestrator | changed: [testbed-node-5] => (item=rancher) 2025-09-04 00:48:53.433493 | orchestrator | changed: [testbed-node-0] => (item=rancher) 2025-09-04 00:48:53.433502 | orchestrator | changed: [testbed-node-5] => (item=rancher/k3s) 2025-09-04 00:48:53.433511 | orchestrator | changed: [testbed-node-1] => (item=rancher) 2025-09-04 00:48:53.433521 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s) 2025-09-04 00:48:53.433530 | orchestrator | changed: [testbed-node-2] => (item=rancher) 2025-09-04 00:48:53.433539 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s) 2025-09-04 00:48:53.433549 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s) 2025-09-04 00:48:53.433558 | orchestrator | 2025-09-04 00:48:53.433567 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-09-04 00:48:53.433577 | orchestrator | Thursday 04 September 2025 00:45:26 +0000 (0:00:02.530) 0:00:29.510 **** 2025-09-04 00:48:53.433586 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:48:53.433596 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:48:53.433605 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:48:53.433615 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:48:53.433624 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:48:53.433633 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:48:53.433643 | orchestrator | 2025-09-04 00:48:53.433657 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-09-04 00:48:53.433667 | orchestrator | 2025-09-04 00:48:53.433677 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-09-04 00:48:53.433686 | orchestrator | Thursday 04 September 2025 00:45:28 +0000 (0:00:01.763) 0:00:31.273 **** 2025-09-04 00:48:53.433696 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:48:53.433705 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:48:53.433714 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:48:53.433724 | orchestrator | 2025-09-04 00:48:53.433733 
| orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-09-04 00:48:53.433743 | orchestrator | Thursday 04 September 2025 00:45:29 +0000 (0:00:01.509) 0:00:32.783 **** 2025-09-04 00:48:53.433752 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:48:53.433762 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:48:53.433771 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:48:53.433780 | orchestrator | 2025-09-04 00:48:53.433790 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-09-04 00:48:53.433804 | orchestrator | Thursday 04 September 2025 00:45:30 +0000 (0:00:01.055) 0:00:33.838 **** 2025-09-04 00:48:53.433820 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:48:53.433829 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:48:53.433839 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:48:53.433848 | orchestrator | 2025-09-04 00:48:53.433858 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-09-04 00:48:53.433867 | orchestrator | Thursday 04 September 2025 00:45:31 +0000 (0:00:00.997) 0:00:34.836 **** 2025-09-04 00:48:53.433877 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:48:53.433886 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:48:53.433895 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:48:53.433905 | orchestrator | 2025-09-04 00:48:53.433914 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-09-04 00:48:53.433924 | orchestrator | Thursday 04 September 2025 00:45:32 +0000 (0:00:01.203) 0:00:36.039 **** 2025-09-04 00:48:53.433933 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.433942 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:48:53.433952 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:48:53.433961 | orchestrator | 2025-09-04 00:48:53.433971 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2025-09-04 00:48:53.433980 | orchestrator | Thursday 04 September 2025 00:45:33 +0000 (0:00:00.473) 0:00:36.512 **** 2025-09-04 00:48:53.433990 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:48:53.433999 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:48:53.434008 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:48:53.434070 | orchestrator | 2025-09-04 00:48:53.434083 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2025-09-04 00:48:53.434093 | orchestrator | Thursday 04 September 2025 00:45:34 +0000 (0:00:01.446) 0:00:37.959 **** 2025-09-04 00:48:53.434102 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:48:53.434111 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:48:53.434121 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:48:53.434130 | orchestrator | 2025-09-04 00:48:53.434140 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-09-04 00:48:53.434149 | orchestrator | Thursday 04 September 2025 00:45:36 +0000 (0:00:01.909) 0:00:39.868 **** 2025-09-04 00:48:53.434159 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:48:53.434168 | orchestrator | 2025-09-04 00:48:53.434178 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-09-04 00:48:53.434187 | orchestrator | Thursday 04 September 2025 00:45:38 +0000 (0:00:01.758) 
0:00:41.627 **** 2025-09-04 00:48:53.434196 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:48:53.434205 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:48:53.434215 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:48:53.434224 | orchestrator | 2025-09-04 00:48:53.434234 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-09-04 00:48:53.434243 | orchestrator | Thursday 04 September 2025 00:45:41 +0000 (0:00:03.515) 0:00:45.143 **** 2025-09-04 00:48:53.434253 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:48:53.434262 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:48:53.434271 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:48:53.434281 | orchestrator | 2025-09-04 00:48:53.434290 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-09-04 00:48:53.434300 | orchestrator | Thursday 04 September 2025 00:45:42 +0000 (0:00:00.711) 0:00:45.854 **** 2025-09-04 00:48:53.434309 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:48:53.434319 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:48:53.434328 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:48:53.434338 | orchestrator | 2025-09-04 00:48:53.434347 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-09-04 00:48:53.434356 | orchestrator | Thursday 04 September 2025 00:45:44 +0000 (0:00:01.787) 0:00:47.642 **** 2025-09-04 00:48:53.434366 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:48:53.434375 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:48:53.434385 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:48:53.434401 | orchestrator | 2025-09-04 00:48:53.434410 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-09-04 00:48:53.434420 | orchestrator | Thursday 04 September 2025 00:45:45 +0000 (0:00:01.452) 0:00:49.095 **** 2025-09-04 00:48:53.434429 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.434439 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:48:53.434448 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:48:53.434457 | orchestrator | 2025-09-04 00:48:53.434487 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-09-04 00:48:53.434497 | orchestrator | Thursday 04 September 2025 00:45:46 +0000 (0:00:00.380) 0:00:49.475 **** 2025-09-04 00:48:53.434506 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.434516 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:48:53.434525 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:48:53.434535 | orchestrator | 2025-09-04 00:48:53.434544 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-09-04 00:48:53.434553 | orchestrator | Thursday 04 September 2025 00:45:46 +0000 (0:00:00.584) 0:00:50.059 **** 2025-09-04 00:48:53.434563 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:48:53.434572 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:48:53.434582 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:48:53.434591 | orchestrator | 2025-09-04 00:48:53.434607 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-09-04 00:48:53.434617 | orchestrator | Thursday 04 September 2025 00:45:49 +0000 (0:00:02.646) 0:00:52.706 **** 2025-09-04 00:48:53.434627 | orchestrator | 
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-09-04 00:48:53.434637 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-09-04 00:48:53.434647 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-09-04 00:48:53.434661 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-04 00:48:53.434672 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-04 00:48:53.434681 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-04 00:48:53.434691 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-04 00:48:53.434700 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-04 00:48:53.434710 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-04 00:48:53.434719 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-09-04 00:48:53.434729 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-09-04 00:48:53.434738 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-09-04 00:48:53.434748 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-09-04 00:48:53.434757 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-09-04 00:48:53.434767 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
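[Editor's note] The "Verify that all nodes actually joined" task polls until all three masters started by the transient k3s-init service have registered, which is why the retries above eventually give way to ok results. A rough shell approximation of such a check (the exact command the role runs is not visible in this log):

  # Poll the freshly initialised control plane until three nodes report in
  k3s kubectl get nodes --no-headers | wc -l    # the role retries until this reaches 3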
2025-09-04 00:48:53.434781 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:48:53.434791 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:48:53.434800 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:48:53.434810 | orchestrator | 2025-09-04 00:48:53.434819 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-09-04 00:48:53.434829 | orchestrator | Thursday 04 September 2025 00:46:45 +0000 (0:00:55.796) 0:01:48.502 **** 2025-09-04 00:48:53.434838 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.434848 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:48:53.434857 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:48:53.434867 | orchestrator | 2025-09-04 00:48:53.434876 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-09-04 00:48:53.434886 | orchestrator | Thursday 04 September 2025 00:46:45 +0000 (0:00:00.395) 0:01:48.898 **** 2025-09-04 00:48:53.434895 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:48:53.434905 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:48:53.434914 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:48:53.434924 | orchestrator | 2025-09-04 00:48:53.434933 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-09-04 00:48:53.434943 | orchestrator | Thursday 04 September 2025 00:46:46 +0000 (0:00:01.194) 0:01:50.092 **** 2025-09-04 00:48:53.434952 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:48:53.434962 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:48:53.434971 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:48:53.434980 | orchestrator | 2025-09-04 00:48:53.434990 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-09-04 00:48:53.434999 | orchestrator | Thursday 04 September 2025 00:46:48 +0000 (0:00:01.472) 0:01:51.564 **** 2025-09-04 00:48:53.435009 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:48:53.435018 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:48:53.435028 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:48:53.435037 | orchestrator | 2025-09-04 00:48:53.435047 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-09-04 00:48:53.435056 | orchestrator | Thursday 04 September 2025 00:47:14 +0000 (0:00:26.278) 0:02:17.843 **** 2025-09-04 00:48:53.435065 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:48:53.435075 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:48:53.435084 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:48:53.435094 | orchestrator | 2025-09-04 00:48:53.435103 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-09-04 00:48:53.435112 | orchestrator | Thursday 04 September 2025 00:47:15 +0000 (0:00:00.791) 0:02:18.635 **** 2025-09-04 00:48:53.435122 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:48:53.435131 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:48:53.435141 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:48:53.435150 | orchestrator | 2025-09-04 00:48:53.435164 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-09-04 00:48:53.435174 | orchestrator | Thursday 04 September 2025 00:47:16 +0000 (0:00:00.660) 0:02:19.295 **** 2025-09-04 00:48:53.435183 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:48:53.435193 | orchestrator | changed: 
[testbed-node-1] 2025-09-04 00:48:53.435202 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:48:53.435211 | orchestrator | 2025-09-04 00:48:53.435221 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-09-04 00:48:53.435230 | orchestrator | Thursday 04 September 2025 00:47:16 +0000 (0:00:00.638) 0:02:19.933 **** 2025-09-04 00:48:53.435240 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:48:53.435249 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:48:53.435259 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:48:53.435268 | orchestrator | 2025-09-04 00:48:53.435277 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-09-04 00:48:53.435287 | orchestrator | Thursday 04 September 2025 00:47:17 +0000 (0:00:00.868) 0:02:20.802 **** 2025-09-04 00:48:53.435302 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:48:53.435312 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:48:53.435321 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:48:53.435331 | orchestrator | 2025-09-04 00:48:53.435340 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-09-04 00:48:53.435350 | orchestrator | Thursday 04 September 2025 00:47:17 +0000 (0:00:00.305) 0:02:21.108 **** 2025-09-04 00:48:53.435359 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:48:53.435369 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:48:53.435378 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:48:53.435388 | orchestrator | 2025-09-04 00:48:53.435397 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-09-04 00:48:53.435407 | orchestrator | Thursday 04 September 2025 00:47:18 +0000 (0:00:00.670) 0:02:21.778 **** 2025-09-04 00:48:53.435416 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:48:53.435425 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:48:53.435435 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:48:53.435444 | orchestrator | 2025-09-04 00:48:53.435453 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-09-04 00:48:53.435463 | orchestrator | Thursday 04 September 2025 00:47:19 +0000 (0:00:00.688) 0:02:22.467 **** 2025-09-04 00:48:53.435496 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:48:53.435505 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:48:53.435515 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:48:53.435524 | orchestrator | 2025-09-04 00:48:53.435534 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-09-04 00:48:53.435543 | orchestrator | Thursday 04 September 2025 00:47:20 +0000 (0:00:01.112) 0:02:23.579 **** 2025-09-04 00:48:53.435552 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:48:53.435562 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:48:53.435571 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:48:53.435581 | orchestrator | 2025-09-04 00:48:53.435590 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-09-04 00:48:53.435600 | orchestrator | Thursday 04 September 2025 00:47:21 +0000 (0:00:00.925) 0:02:24.505 **** 2025-09-04 00:48:53.435609 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.435618 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:48:53.435628 | orchestrator | skipping: [testbed-node-2] 2025-09-04 
00:48:53.435637 | orchestrator | 2025-09-04 00:48:53.435647 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-09-04 00:48:53.435656 | orchestrator | Thursday 04 September 2025 00:47:21 +0000 (0:00:00.279) 0:02:24.784 **** 2025-09-04 00:48:53.435665 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.435675 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:48:53.435684 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:48:53.435694 | orchestrator | 2025-09-04 00:48:53.435703 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-09-04 00:48:53.435713 | orchestrator | Thursday 04 September 2025 00:47:21 +0000 (0:00:00.398) 0:02:25.183 **** 2025-09-04 00:48:53.435722 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:48:53.435731 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:48:53.435741 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:48:53.435750 | orchestrator | 2025-09-04 00:48:53.435760 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-09-04 00:48:53.435769 | orchestrator | Thursday 04 September 2025 00:47:22 +0000 (0:00:00.978) 0:02:26.162 **** 2025-09-04 00:48:53.435779 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:48:53.435788 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:48:53.435797 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:48:53.435807 | orchestrator | 2025-09-04 00:48:53.435816 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-09-04 00:48:53.435826 | orchestrator | Thursday 04 September 2025 00:47:23 +0000 (0:00:00.743) 0:02:26.906 **** 2025-09-04 00:48:53.435843 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-09-04 00:48:53.435852 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-09-04 00:48:53.435862 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-09-04 00:48:53.435872 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-09-04 00:48:53.436624 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-09-04 00:48:53.436655 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-09-04 00:48:53.436666 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-09-04 00:48:53.436676 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-09-04 00:48:53.436689 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-09-04 00:48:53.436711 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-09-04 00:48:53.436722 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-04 00:48:53.436733 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-09-04 00:48:53.436744 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-04 00:48:53.436756 | 
orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-04 00:48:53.436767 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-04 00:48:53.436778 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-04 00:48:53.436790 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-04 00:48:53.436801 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-04 00:48:53.436812 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-04 00:48:53.436823 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-04 00:48:53.436834 | orchestrator | 2025-09-04 00:48:53.436845 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-09-04 00:48:53.436856 | orchestrator | 2025-09-04 00:48:53.436868 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-09-04 00:48:53.436879 | orchestrator | Thursday 04 September 2025 00:47:26 +0000 (0:00:03.161) 0:02:30.068 **** 2025-09-04 00:48:53.436890 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:48:53.436901 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:48:53.436912 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:48:53.436923 | orchestrator | 2025-09-04 00:48:53.436934 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-09-04 00:48:53.436945 | orchestrator | Thursday 04 September 2025 00:47:27 +0000 (0:00:00.486) 0:02:30.554 **** 2025-09-04 00:48:53.436956 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:48:53.436968 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:48:53.436978 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:48:53.436989 | orchestrator | 2025-09-04 00:48:53.437000 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-09-04 00:48:53.437012 | orchestrator | Thursday 04 September 2025 00:47:27 +0000 (0:00:00.622) 0:02:31.177 **** 2025-09-04 00:48:53.437022 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:48:53.437033 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:48:53.437044 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:48:53.437055 | orchestrator | 2025-09-04 00:48:53.437064 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-09-04 00:48:53.437085 | orchestrator | Thursday 04 September 2025 00:47:28 +0000 (0:00:00.313) 0:02:31.491 **** 2025-09-04 00:48:53.437094 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:48:53.437104 | orchestrator | 2025-09-04 00:48:53.437113 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-09-04 00:48:53.437123 | orchestrator | Thursday 04 September 2025 00:47:28 +0000 (0:00:00.661) 0:02:32.152 **** 2025-09-04 00:48:53.437132 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:48:53.437142 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:48:53.437156 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:48:53.437166 | orchestrator | 2025-09-04 00:48:53.437176 | orchestrator | TASK [k3s_agent : Copy K3s 
http_proxy conf file] ******************************* 2025-09-04 00:48:53.437185 | orchestrator | Thursday 04 September 2025 00:47:29 +0000 (0:00:00.320) 0:02:32.472 **** 2025-09-04 00:48:53.437194 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:48:53.437204 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:48:53.437213 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:48:53.437223 | orchestrator | 2025-09-04 00:48:53.437232 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-09-04 00:48:53.437241 | orchestrator | Thursday 04 September 2025 00:47:29 +0000 (0:00:00.292) 0:02:32.765 **** 2025-09-04 00:48:53.437251 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:48:53.437260 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:48:53.437269 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:48:53.437279 | orchestrator | 2025-09-04 00:48:53.437288 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2025-09-04 00:48:53.437298 | orchestrator | Thursday 04 September 2025 00:47:29 +0000 (0:00:00.323) 0:02:33.089 **** 2025-09-04 00:48:53.437307 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:48:53.437316 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:48:53.437326 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:48:53.437335 | orchestrator | 2025-09-04 00:48:53.437345 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2025-09-04 00:48:53.437354 | orchestrator | Thursday 04 September 2025 00:47:30 +0000 (0:00:00.983) 0:02:34.072 **** 2025-09-04 00:48:53.437363 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:48:53.437373 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:48:53.437382 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:48:53.437392 | orchestrator | 2025-09-04 00:48:53.437401 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-09-04 00:48:53.437410 | orchestrator | Thursday 04 September 2025 00:47:31 +0000 (0:00:01.117) 0:02:35.190 **** 2025-09-04 00:48:53.437420 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:48:53.437429 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:48:53.437438 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:48:53.437448 | orchestrator | 2025-09-04 00:48:53.437457 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-09-04 00:48:53.437483 | orchestrator | Thursday 04 September 2025 00:47:33 +0000 (0:00:01.347) 0:02:36.537 **** 2025-09-04 00:48:53.437493 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:48:53.437502 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:48:53.437512 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:48:53.437521 | orchestrator | 2025-09-04 00:48:53.437536 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-09-04 00:48:53.437546 | orchestrator | 2025-09-04 00:48:53.437556 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-09-04 00:48:53.437565 | orchestrator | Thursday 04 September 2025 00:47:45 +0000 (0:00:12.064) 0:02:48.602 **** 2025-09-04 00:48:53.437575 | orchestrator | ok: [testbed-manager] 2025-09-04 00:48:53.437584 | orchestrator | 2025-09-04 00:48:53.437593 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-09-04 
00:48:53.437603 | orchestrator | Thursday 04 September 2025 00:47:46 +0000 (0:00:00.864) 0:02:49.467 **** 2025-09-04 00:48:53.437618 | orchestrator | changed: [testbed-manager] 2025-09-04 00:48:53.437628 | orchestrator | 2025-09-04 00:48:53.437637 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-04 00:48:53.437647 | orchestrator | Thursday 04 September 2025 00:47:46 +0000 (0:00:00.488) 0:02:49.956 **** 2025-09-04 00:48:53.437656 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-04 00:48:53.437666 | orchestrator | 2025-09-04 00:48:53.437675 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-04 00:48:53.437685 | orchestrator | Thursday 04 September 2025 00:47:47 +0000 (0:00:00.648) 0:02:50.604 **** 2025-09-04 00:48:53.437694 | orchestrator | changed: [testbed-manager] 2025-09-04 00:48:53.437704 | orchestrator | 2025-09-04 00:48:53.437713 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-09-04 00:48:53.437722 | orchestrator | Thursday 04 September 2025 00:47:48 +0000 (0:00:01.038) 0:02:51.642 **** 2025-09-04 00:48:53.437732 | orchestrator | changed: [testbed-manager] 2025-09-04 00:48:53.437741 | orchestrator | 2025-09-04 00:48:53.437751 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-09-04 00:48:53.437760 | orchestrator | Thursday 04 September 2025 00:47:49 +0000 (0:00:00.682) 0:02:52.325 **** 2025-09-04 00:48:53.437770 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-04 00:48:53.437779 | orchestrator | 2025-09-04 00:48:53.437788 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-09-04 00:48:53.437798 | orchestrator | Thursday 04 September 2025 00:47:50 +0000 (0:00:01.860) 0:02:54.185 **** 2025-09-04 00:48:53.437807 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-04 00:48:53.437817 | orchestrator | 2025-09-04 00:48:53.437826 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-09-04 00:48:53.437836 | orchestrator | Thursday 04 September 2025 00:47:52 +0000 (0:00:01.052) 0:02:55.237 **** 2025-09-04 00:48:53.437845 | orchestrator | changed: [testbed-manager] 2025-09-04 00:48:53.437855 | orchestrator | 2025-09-04 00:48:53.437864 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-09-04 00:48:53.437873 | orchestrator | Thursday 04 September 2025 00:47:52 +0000 (0:00:00.466) 0:02:55.704 **** 2025-09-04 00:48:53.437883 | orchestrator | changed: [testbed-manager] 2025-09-04 00:48:53.437892 | orchestrator | 2025-09-04 00:48:53.437902 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-09-04 00:48:53.437911 | orchestrator | 2025-09-04 00:48:53.437921 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-09-04 00:48:53.437931 | orchestrator | Thursday 04 September 2025 00:47:53 +0000 (0:00:00.738) 0:02:56.443 **** 2025-09-04 00:48:53.437940 | orchestrator | ok: [testbed-manager] 2025-09-04 00:48:53.437949 | orchestrator | 2025-09-04 00:48:53.437959 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2025-09-04 00:48:53.437968 | orchestrator | Thursday 04 September 2025 00:47:53 +0000 (0:00:00.178) 0:02:56.621 **** 2025-09-04 
00:48:53.437982 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-09-04 00:48:53.437992 | orchestrator | 2025-09-04 00:48:53.438002 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-09-04 00:48:53.438011 | orchestrator | Thursday 04 September 2025 00:47:53 +0000 (0:00:00.208) 0:02:56.830 **** 2025-09-04 00:48:53.438078 | orchestrator | ok: [testbed-manager] 2025-09-04 00:48:53.438089 | orchestrator | 2025-09-04 00:48:53.438098 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2025-09-04 00:48:53.438108 | orchestrator | Thursday 04 September 2025 00:47:54 +0000 (0:00:00.889) 0:02:57.720 **** 2025-09-04 00:48:53.438117 | orchestrator | ok: [testbed-manager] 2025-09-04 00:48:53.438126 | orchestrator | 2025-09-04 00:48:53.438136 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2025-09-04 00:48:53.438145 | orchestrator | Thursday 04 September 2025 00:47:55 +0000 (0:00:01.367) 0:02:59.087 **** 2025-09-04 00:48:53.438155 | orchestrator | changed: [testbed-manager] 2025-09-04 00:48:53.438175 | orchestrator | 2025-09-04 00:48:53.438185 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-09-04 00:48:53.438195 | orchestrator | Thursday 04 September 2025 00:47:57 +0000 (0:00:01.727) 0:03:00.815 **** 2025-09-04 00:48:53.438204 | orchestrator | ok: [testbed-manager] 2025-09-04 00:48:53.438213 | orchestrator | 2025-09-04 00:48:53.438223 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-09-04 00:48:53.438232 | orchestrator | Thursday 04 September 2025 00:47:57 +0000 (0:00:00.397) 0:03:01.212 **** 2025-09-04 00:48:53.438242 | orchestrator | changed: [testbed-manager] 2025-09-04 00:48:53.438251 | orchestrator | 2025-09-04 00:48:53.438261 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-09-04 00:48:53.438270 | orchestrator | Thursday 04 September 2025 00:48:04 +0000 (0:00:06.750) 0:03:07.962 **** 2025-09-04 00:48:53.438280 | orchestrator | changed: [testbed-manager] 2025-09-04 00:48:53.438297 | orchestrator | 2025-09-04 00:48:53.438314 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-09-04 00:48:53.438584 | orchestrator | Thursday 04 September 2025 00:48:19 +0000 (0:00:14.655) 0:03:22.618 **** 2025-09-04 00:48:53.438622 | orchestrator | ok: [testbed-manager] 2025-09-04 00:48:53.438631 | orchestrator | 2025-09-04 00:48:53.438641 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-09-04 00:48:53.438650 | orchestrator | 2025-09-04 00:48:53.438660 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-09-04 00:48:53.438683 | orchestrator | Thursday 04 September 2025 00:48:19 +0000 (0:00:00.544) 0:03:23.162 **** 2025-09-04 00:48:53.438693 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:48:53.438703 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:48:53.438712 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:48:53.438722 | orchestrator | 2025-09-04 00:48:53.438731 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-09-04 00:48:53.438742 | orchestrator | Thursday 04 September 2025 00:48:20 +0000 (0:00:00.397) 0:03:23.559 **** 
2025-09-04 00:48:53.438758 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.438773 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:48:53.438800 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:48:53.438816 | orchestrator | 2025-09-04 00:48:53.438831 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-09-04 00:48:53.438846 | orchestrator | Thursday 04 September 2025 00:48:20 +0000 (0:00:00.302) 0:03:23.862 **** 2025-09-04 00:48:53.438863 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:48:53.438880 | orchestrator | 2025-09-04 00:48:53.438895 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-09-04 00:48:53.438910 | orchestrator | Thursday 04 September 2025 00:48:21 +0000 (0:00:00.818) 0:03:24.680 **** 2025-09-04 00:48:53.438926 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.438936 | orchestrator | 2025-09-04 00:48:53.438946 | orchestrator | TASK [k3s_server_post : Check if Cilium CLI is installed] ********************** 2025-09-04 00:48:53.438955 | orchestrator | Thursday 04 September 2025 00:48:21 +0000 (0:00:00.262) 0:03:24.943 **** 2025-09-04 00:48:53.438967 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.438984 | orchestrator | 2025-09-04 00:48:53.438993 | orchestrator | TASK [k3s_server_post : Check for Cilium CLI version in command output] ******** 2025-09-04 00:48:53.439003 | orchestrator | Thursday 04 September 2025 00:48:21 +0000 (0:00:00.277) 0:03:25.220 **** 2025-09-04 00:48:53.439012 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.439022 | orchestrator | 2025-09-04 00:48:53.439031 | orchestrator | TASK [k3s_server_post : Get latest stable Cilium CLI version file] ************* 2025-09-04 00:48:53.439040 | orchestrator | Thursday 04 September 2025 00:48:22 +0000 (0:00:00.586) 0:03:25.806 **** 2025-09-04 00:48:53.439047 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.439054 | orchestrator | 2025-09-04 00:48:53.439060 | orchestrator | TASK [k3s_server_post : Read Cilium CLI stable version from file] ************** 2025-09-04 00:48:53.439076 | orchestrator | Thursday 04 September 2025 00:48:22 +0000 (0:00:00.204) 0:03:26.011 **** 2025-09-04 00:48:53.439083 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.439091 | orchestrator | 2025-09-04 00:48:53.439098 | orchestrator | TASK [k3s_server_post : Log installed Cilium CLI version] ********************** 2025-09-04 00:48:53.439106 | orchestrator | Thursday 04 September 2025 00:48:23 +0000 (0:00:00.239) 0:03:26.251 **** 2025-09-04 00:48:53.439113 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.439121 | orchestrator | 2025-09-04 00:48:53.439128 | orchestrator | TASK [k3s_server_post : Log latest stable Cilium CLI version] ****************** 2025-09-04 00:48:53.439136 | orchestrator | Thursday 04 September 2025 00:48:23 +0000 (0:00:00.218) 0:03:26.470 **** 2025-09-04 00:48:53.439143 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.439150 | orchestrator | 2025-09-04 00:48:53.439158 | orchestrator | TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] *** 2025-09-04 00:48:53.439165 | orchestrator | Thursday 04 September 2025 00:48:23 +0000 (0:00:00.232) 0:03:26.703 **** 2025-09-04 00:48:53.439173 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.439181 | orchestrator | 
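The run of skipped k3s_server_post tasks above is the role's Cilium CLI bootstrap: it checks for an existing CLI on the first master, compares it against the latest stable release, and only downloads, verifies, and extracts a new binary when that comparison says so, which is presumably why every step is skipped on this run. A minimal, hypothetical sketch of that check/download/verify/extract pattern is shown below; the module choices, variable names, amd64 architecture, and upstream URLs are assumptions for illustration (only the ".tar.gz" / ".tar.gz.sha256sum" loop items mirror the skipped items in the log), not code taken from the role.

```yaml
# Hypothetical sketch only: variable names, the amd64 architecture, and the
# upstream download URLs are assumptions, not values taken from the role.
- name: Check if Cilium CLI is installed
  ansible.builtin.command: cilium version --client
  register: cilium_cli_check
  changed_when: false
  failed_when: false

- name: Get latest stable Cilium CLI version file
  ansible.builtin.uri:
    url: https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt
    return_content: true
  register: cilium_cli_stable

- name: Download Cilium CLI and checksum
  ansible.builtin.get_url:
    url: "https://github.com/cilium/cilium-cli/releases/download/{{ cilium_cli_stable.content | trim }}/cilium-linux-amd64{{ item }}"
    dest: "/tmp/cilium-linux-amd64{{ item }}"
    mode: "0644"
  loop:
    - ".tar.gz"
    - ".tar.gz.sha256sum"
  when: cilium_cli_check.rc != 0

- name: Verify the downloaded tarball
  ansible.builtin.command:
    cmd: sha256sum --check cilium-linux-amd64.tar.gz.sha256sum
    chdir: /tmp
  changed_when: false
  when: cilium_cli_check.rc != 0

- name: Extract Cilium CLI to /usr/local/bin
  ansible.builtin.unarchive:
    src: /tmp/cilium-linux-amd64.tar.gz
    dest: /usr/local/bin
    remote_src: true
  when: cilium_cli_check.rc != 0
```

In the real role these steps also run only once against the first master node, which matches the log showing them evaluated solely for testbed-node-0.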
2025-09-04 00:48:53.439188 | orchestrator | TASK [k3s_server_post : Set architecture variable] ***************************** 2025-09-04 00:48:53.439201 | orchestrator | Thursday 04 September 2025 00:48:23 +0000 (0:00:00.204) 0:03:26.907 **** 2025-09-04 00:48:53.439209 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.439216 | orchestrator | 2025-09-04 00:48:53.439223 | orchestrator | TASK [k3s_server_post : Download Cilium CLI and checksum] ********************** 2025-09-04 00:48:53.439231 | orchestrator | Thursday 04 September 2025 00:48:23 +0000 (0:00:00.199) 0:03:27.106 **** 2025-09-04 00:48:53.439239 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz)  2025-09-04 00:48:53.439247 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)  2025-09-04 00:48:53.439254 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.439262 | orchestrator | 2025-09-04 00:48:53.439269 | orchestrator | TASK [k3s_server_post : Verify the downloaded tarball] ************************* 2025-09-04 00:48:53.439277 | orchestrator | Thursday 04 September 2025 00:48:24 +0000 (0:00:00.916) 0:03:28.023 **** 2025-09-04 00:48:53.439284 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.439292 | orchestrator | 2025-09-04 00:48:53.439299 | orchestrator | TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ****************** 2025-09-04 00:48:53.439307 | orchestrator | Thursday 04 September 2025 00:48:25 +0000 (0:00:00.214) 0:03:28.237 **** 2025-09-04 00:48:53.439314 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.439322 | orchestrator | 2025-09-04 00:48:53.439330 | orchestrator | TASK [k3s_server_post : Remove downloaded tarball and checksum file] *********** 2025-09-04 00:48:53.439337 | orchestrator | Thursday 04 September 2025 00:48:25 +0000 (0:00:00.236) 0:03:28.474 **** 2025-09-04 00:48:53.439345 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.439353 | orchestrator | 2025-09-04 00:48:53.439361 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-09-04 00:48:53.439368 | orchestrator | Thursday 04 September 2025 00:48:25 +0000 (0:00:00.267) 0:03:28.742 **** 2025-09-04 00:48:53.439376 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.439384 | orchestrator | 2025-09-04 00:48:53.439391 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-09-04 00:48:53.439399 | orchestrator | Thursday 04 September 2025 00:48:25 +0000 (0:00:00.236) 0:03:28.979 **** 2025-09-04 00:48:53.439407 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.439415 | orchestrator | 2025-09-04 00:48:53.439423 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-09-04 00:48:53.439431 | orchestrator | Thursday 04 September 2025 00:48:25 +0000 (0:00:00.195) 0:03:29.174 **** 2025-09-04 00:48:53.439439 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.439446 | orchestrator | 2025-09-04 00:48:53.439453 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-09-04 00:48:53.439487 | orchestrator | Thursday 04 September 2025 00:48:26 +0000 (0:00:00.206) 0:03:29.381 **** 2025-09-04 00:48:53.439501 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.439508 | orchestrator | 2025-09-04 00:48:53.439515 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-09-04 00:48:53.439521 
| orchestrator | Thursday 04 September 2025 00:48:26 +0000 (0:00:00.213) 0:03:29.595 **** 2025-09-04 00:48:53.439527 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.439534 | orchestrator | 2025-09-04 00:48:53.439540 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-09-04 00:48:53.439547 | orchestrator | Thursday 04 September 2025 00:48:26 +0000 (0:00:00.227) 0:03:29.822 **** 2025-09-04 00:48:53.439553 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.439560 | orchestrator | 2025-09-04 00:48:53.439566 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-09-04 00:48:53.439573 | orchestrator | Thursday 04 September 2025 00:48:26 +0000 (0:00:00.274) 0:03:30.097 **** 2025-09-04 00:48:53.439579 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.439586 | orchestrator | 2025-09-04 00:48:53.439592 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-09-04 00:48:53.439598 | orchestrator | Thursday 04 September 2025 00:48:27 +0000 (0:00:00.226) 0:03:30.324 **** 2025-09-04 00:48:53.439605 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.439611 | orchestrator | 2025-09-04 00:48:53.439618 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-09-04 00:48:53.439625 | orchestrator | Thursday 04 September 2025 00:48:27 +0000 (0:00:00.202) 0:03:30.526 **** 2025-09-04 00:48:53.439631 | orchestrator | skipping: [testbed-node-0] => (item=deployment/cilium-operator)  2025-09-04 00:48:53.439638 | orchestrator | skipping: [testbed-node-0] => (item=daemonset/cilium)  2025-09-04 00:48:53.439645 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-relay)  2025-09-04 00:48:53.439651 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-ui)  2025-09-04 00:48:53.439658 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.439664 | orchestrator | 2025-09-04 00:48:53.439671 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-09-04 00:48:53.439677 | orchestrator | Thursday 04 September 2025 00:48:28 +0000 (0:00:01.079) 0:03:31.606 **** 2025-09-04 00:48:53.439684 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.439690 | orchestrator | 2025-09-04 00:48:53.439697 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-09-04 00:48:53.439703 | orchestrator | Thursday 04 September 2025 00:48:28 +0000 (0:00:00.226) 0:03:31.832 **** 2025-09-04 00:48:53.439710 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.439716 | orchestrator | 2025-09-04 00:48:53.439723 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-09-04 00:48:53.439729 | orchestrator | Thursday 04 September 2025 00:48:28 +0000 (0:00:00.249) 0:03:32.081 **** 2025-09-04 00:48:53.439738 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.439749 | orchestrator | 2025-09-04 00:48:53.439760 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2025-09-04 00:48:53.439770 | orchestrator | Thursday 04 September 2025 00:48:29 +0000 (0:00:00.254) 0:03:32.336 **** 2025-09-04 00:48:53.439782 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.439793 | orchestrator | 2025-09-04 00:48:53.439804 | orchestrator | TASK [k3s_server_post : Test 
for BGP config resources] ************************* 2025-09-04 00:48:53.439813 | orchestrator | Thursday 04 September 2025 00:48:29 +0000 (0:00:00.193) 0:03:32.529 **** 2025-09-04 00:48:53.439824 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)  2025-09-04 00:48:53.439830 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)  2025-09-04 00:48:53.439837 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.439843 | orchestrator | 2025-09-04 00:48:53.439850 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-09-04 00:48:53.439862 | orchestrator | Thursday 04 September 2025 00:48:29 +0000 (0:00:00.307) 0:03:32.837 **** 2025-09-04 00:48:53.439868 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.439875 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:48:53.439881 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:48:53.439888 | orchestrator | 2025-09-04 00:48:53.439894 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-09-04 00:48:53.439901 | orchestrator | Thursday 04 September 2025 00:48:29 +0000 (0:00:00.336) 0:03:33.173 **** 2025-09-04 00:48:53.439907 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:48:53.439914 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:48:53.439920 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:48:53.439927 | orchestrator | 2025-09-04 00:48:53.439933 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-09-04 00:48:53.439940 | orchestrator | 2025-09-04 00:48:53.439946 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2025-09-04 00:48:53.439953 | orchestrator | Thursday 04 September 2025 00:48:31 +0000 (0:00:01.272) 0:03:34.446 **** 2025-09-04 00:48:53.439959 | orchestrator | ok: [testbed-manager] 2025-09-04 00:48:53.439966 | orchestrator | 2025-09-04 00:48:53.439972 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2025-09-04 00:48:53.439979 | orchestrator | Thursday 04 September 2025 00:48:31 +0000 (0:00:00.157) 0:03:34.603 **** 2025-09-04 00:48:53.439986 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2025-09-04 00:48:53.439992 | orchestrator | 2025-09-04 00:48:53.439999 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-09-04 00:48:53.440005 | orchestrator | Thursday 04 September 2025 00:48:31 +0000 (0:00:00.292) 0:03:34.896 **** 2025-09-04 00:48:53.440012 | orchestrator | changed: [testbed-manager] 2025-09-04 00:48:53.440018 | orchestrator | 2025-09-04 00:48:53.440025 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-09-04 00:48:53.440031 | orchestrator | 2025-09-04 00:48:53.440038 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-09-04 00:48:53.440049 | orchestrator | Thursday 04 September 2025 00:48:37 +0000 (0:00:05.608) 0:03:40.504 **** 2025-09-04 00:48:53.440056 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:48:53.440062 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:48:53.440069 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:48:53.440075 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:48:53.440082 | orchestrator | ok: [testbed-node-1] 
2025-09-04 00:48:53.440088 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:48:53.440095 | orchestrator | 2025-09-04 00:48:53.440101 | orchestrator | TASK [Manage labels] *********************************************************** 2025-09-04 00:48:53.440108 | orchestrator | Thursday 04 September 2025 00:48:38 +0000 (0:00:00.873) 0:03:41.378 **** 2025-09-04 00:48:53.440115 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-04 00:48:53.440121 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-04 00:48:53.440128 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-04 00:48:53.440134 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-04 00:48:53.440141 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-04 00:48:53.440147 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-04 00:48:53.440154 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-04 00:48:53.440160 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-04 00:48:53.440167 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-04 00:48:53.440173 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-04 00:48:53.440184 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-04 00:48:53.440191 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-04 00:48:53.440197 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-04 00:48:53.440204 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-04 00:48:53.440210 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-04 00:48:53.440217 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-04 00:48:53.440223 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-04 00:48:53.440230 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-04 00:48:53.440236 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-04 00:48:53.440243 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-04 00:48:53.440249 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-04 00:48:53.440255 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-04 00:48:53.440265 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-04 00:48:53.440272 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-04 00:48:53.440278 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-04 00:48:53.440284 | orchestrator | ok: [testbed-node-2 -> localhost] => 
(item=node-role.osism.tech/rook-mon=true) 2025-09-04 00:48:53.440291 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-04 00:48:53.440297 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-04 00:48:53.440304 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-04 00:48:53.440310 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-04 00:48:53.440317 | orchestrator | 2025-09-04 00:48:53.440323 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-09-04 00:48:53.440330 | orchestrator | Thursday 04 September 2025 00:48:49 +0000 (0:00:11.428) 0:03:52.806 **** 2025-09-04 00:48:53.440336 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:48:53.440343 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:48:53.440349 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:48:53.440356 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.440362 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:48:53.440369 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:48:53.440375 | orchestrator | 2025-09-04 00:48:53.440382 | orchestrator | TASK [Manage taints] *********************************************************** 2025-09-04 00:48:53.440388 | orchestrator | Thursday 04 September 2025 00:48:50 +0000 (0:00:00.602) 0:03:53.409 **** 2025-09-04 00:48:53.440395 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:48:53.440401 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:48:53.440407 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:48:53.440414 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:48:53.440420 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:48:53.440427 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:48:53.440433 | orchestrator | 2025-09-04 00:48:53.440440 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:48:53.440450 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:48:53.440458 | orchestrator | testbed-node-0 : ok=42  changed=20  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0 2025-09-04 00:48:53.440514 | orchestrator | testbed-node-1 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-04 00:48:53.440523 | orchestrator | testbed-node-2 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-04 00:48:53.440530 | orchestrator | testbed-node-3 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-04 00:48:53.440536 | orchestrator | testbed-node-4 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-04 00:48:53.440543 | orchestrator | testbed-node-5 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-04 00:48:53.440549 | orchestrator | 2025-09-04 00:48:53.440556 | orchestrator | 2025-09-04 00:48:53.440562 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 00:48:53.440569 | orchestrator | Thursday 04 September 2025 00:48:50 +0000 (0:00:00.416) 0:03:53.826 **** 2025-09-04 00:48:53.440575 | orchestrator | =============================================================================== 2025-09-04 00:48:53.440581 | 
orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 55.80s 2025-09-04 00:48:53.440588 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 26.28s 2025-09-04 00:48:53.440595 | orchestrator | kubectl : Install required packages ------------------------------------ 14.66s 2025-09-04 00:48:53.440601 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 12.06s 2025-09-04 00:48:53.440608 | orchestrator | Manage labels ---------------------------------------------------------- 11.43s 2025-09-04 00:48:53.440614 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.75s 2025-09-04 00:48:53.440621 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.09s 2025-09-04 00:48:53.440627 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.61s 2025-09-04 00:48:53.440634 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 3.52s 2025-09-04 00:48:53.440640 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 3.22s 2025-09-04 00:48:53.440647 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.16s 2025-09-04 00:48:53.440653 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 3.10s 2025-09-04 00:48:53.440660 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.65s 2025-09-04 00:48:53.440670 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 2.53s 2025-09-04 00:48:53.440676 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 1.91s 2025-09-04 00:48:53.440683 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.86s 2025-09-04 00:48:53.440689 | orchestrator | k3s_server : Download vip rbac manifest to first master ----------------- 1.79s 2025-09-04 00:48:53.440696 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 1.77s 2025-09-04 00:48:53.440702 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 1.76s 2025-09-04 00:48:53.440709 | orchestrator | k3s_server : Deploy vip manifest ---------------------------------------- 1.76s 2025-09-04 00:48:53.440715 | orchestrator | 2025-09-04 00:48:53 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:48:53.440722 | orchestrator | 2025-09-04 00:48:53 | INFO  | Task d0dbe15d-f1bd-4ade-8f34-310a399afccb is in state STARTED 2025-09-04 00:48:53.441799 | orchestrator | 2025-09-04 00:48:53 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:48:53.442066 | orchestrator | 2025-09-04 00:48:53 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:48:53.442960 | orchestrator | 2025-09-04 00:48:53 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:48:53.442988 | orchestrator | 2025-09-04 00:48:53 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:48:56.611822 | orchestrator | 2025-09-04 00:48:56 | INFO  | Task ebb700e7-dd73-4339-a8f4-fce85957a545 is in state STARTED 2025-09-04 00:48:56.616091 | orchestrator | 2025-09-04 00:48:56 | INFO  | Task 
e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:48:56.616259 | orchestrator | 2025-09-04 00:48:56 | INFO  | Task d0dbe15d-f1bd-4ade-8f34-310a399afccb is in state STARTED 2025-09-04 00:48:56.616276 | orchestrator | 2025-09-04 00:48:56 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:48:56.616299 | orchestrator | 2025-09-04 00:48:56 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:48:56.617054 | orchestrator | 2025-09-04 00:48:56 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:48:56.617079 | orchestrator | 2025-09-04 00:48:56 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:48:59.648695 | orchestrator | 2025-09-04 00:48:59 | INFO  | Task ebb700e7-dd73-4339-a8f4-fce85957a545 is in state SUCCESS 2025-09-04 00:48:59.649055 | orchestrator | 2025-09-04 00:48:59 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:48:59.649678 | orchestrator | 2025-09-04 00:48:59 | INFO  | Task d0dbe15d-f1bd-4ade-8f34-310a399afccb is in state STARTED 2025-09-04 00:48:59.650332 | orchestrator | 2025-09-04 00:48:59 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:48:59.651517 | orchestrator | 2025-09-04 00:48:59 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:48:59.651541 | orchestrator | 2025-09-04 00:48:59 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:48:59.651553 | orchestrator | 2025-09-04 00:48:59 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:49:02.684139 | orchestrator | 2025-09-04 00:49:02 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:49:02.686152 | orchestrator | 2025-09-04 00:49:02 | INFO  | Task d0dbe15d-f1bd-4ade-8f34-310a399afccb is in state SUCCESS 2025-09-04 00:49:02.686925 | orchestrator | 2025-09-04 00:49:02 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:49:02.687689 | orchestrator | 2025-09-04 00:49:02 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:49:02.688324 | orchestrator | 2025-09-04 00:49:02 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:49:02.688345 | orchestrator | 2025-09-04 00:49:02 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:49:05.724499 | orchestrator | 2025-09-04 00:49:05 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:49:05.726257 | orchestrator | 2025-09-04 00:49:05 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:49:05.727045 | orchestrator | 2025-09-04 00:49:05 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:49:05.728021 | orchestrator | 2025-09-04 00:49:05 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:49:05.728063 | orchestrator | 2025-09-04 00:49:05 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:49:08.779408 | orchestrator | 2025-09-04 00:49:08 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:49:08.825859 | orchestrator | 2025-09-04 00:49:08 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:49:08.825946 | orchestrator | 2025-09-04 00:49:08 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:49:08.825960 | orchestrator | 2025-09-04 
00:49:08 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:49:08.825973 | orchestrator | 2025-09-04 00:49:08 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:49:11.830175 | orchestrator | 2025-09-04 00:49:11 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:49:11.833697 | orchestrator | 2025-09-04 00:49:11 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:49:11.836653 | orchestrator | 2025-09-04 00:49:11 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:49:11.839935 | orchestrator | 2025-09-04 00:49:11 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:49:11.840178 | orchestrator | 2025-09-04 00:49:11 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:49:14.885267 | orchestrator | 2025-09-04 00:49:14 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:49:14.885654 | orchestrator | 2025-09-04 00:49:14 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:49:14.887470 | orchestrator | 2025-09-04 00:49:14 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:49:14.888233 | orchestrator | 2025-09-04 00:49:14 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:49:14.888257 | orchestrator | 2025-09-04 00:49:14 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:49:17.932347 | orchestrator | 2025-09-04 00:49:17 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:49:17.933607 | orchestrator | 2025-09-04 00:49:17 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:49:17.934604 | orchestrator | 2025-09-04 00:49:17 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:49:17.935535 | orchestrator | 2025-09-04 00:49:17 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:49:17.935554 | orchestrator | 2025-09-04 00:49:17 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:49:21.072415 | orchestrator | 2025-09-04 00:49:21 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:49:21.073394 | orchestrator | 2025-09-04 00:49:21 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:49:21.075912 | orchestrator | 2025-09-04 00:49:21 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:49:21.076651 | orchestrator | 2025-09-04 00:49:21 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:49:21.076684 | orchestrator | 2025-09-04 00:49:21 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:49:24.116521 | orchestrator | 2025-09-04 00:49:24 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:49:24.119705 | orchestrator | 2025-09-04 00:49:24 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:49:24.123191 | orchestrator | 2025-09-04 00:49:24 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:49:24.126756 | orchestrator | 2025-09-04 00:49:24 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:49:24.127095 | orchestrator | 2025-09-04 00:49:24 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:49:27.172390 | orchestrator | 2025-09-04 00:49:27 | INFO  | Task 
e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:49:27.173150 | orchestrator | 2025-09-04 00:49:27 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:49:27.174139 | orchestrator | 2025-09-04 00:49:27 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:49:27.175342 | orchestrator | 2025-09-04 00:49:27 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:49:27.175365 | orchestrator | 2025-09-04 00:49:27 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:49:30.208196 | orchestrator | 2025-09-04 00:49:30 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:49:30.208295 | orchestrator | 2025-09-04 00:49:30 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:49:30.209836 | orchestrator | 2025-09-04 00:49:30 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:49:30.211242 | orchestrator | 2025-09-04 00:49:30 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:49:30.211337 | orchestrator | 2025-09-04 00:49:30 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:49:33.253042 | orchestrator | 2025-09-04 00:49:33 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:49:33.255226 | orchestrator | 2025-09-04 00:49:33 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:49:33.257126 | orchestrator | 2025-09-04 00:49:33 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:49:33.259089 | orchestrator | 2025-09-04 00:49:33 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:49:33.259571 | orchestrator | 2025-09-04 00:49:33 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:49:36.309148 | orchestrator | 2025-09-04 00:49:36 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:49:36.310555 | orchestrator | 2025-09-04 00:49:36 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:49:36.314590 | orchestrator | 2025-09-04 00:49:36 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:49:36.320236 | orchestrator | 2025-09-04 00:49:36 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:49:36.320261 | orchestrator | 2025-09-04 00:49:36 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:49:39.376027 | orchestrator | 2025-09-04 00:49:39 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:49:39.378950 | orchestrator | 2025-09-04 00:49:39 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:49:39.383363 | orchestrator | 2025-09-04 00:49:39 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:49:39.385047 | orchestrator | 2025-09-04 00:49:39 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:49:39.385073 | orchestrator | 2025-09-04 00:49:39 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:49:42.423992 | orchestrator | 2025-09-04 00:49:42 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:49:42.426666 | orchestrator | 2025-09-04 00:49:42 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:49:42.428873 | orchestrator | 2025-09-04 00:49:42 | INFO  | Task 
4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:49:42.431363 | orchestrator | 2025-09-04 00:49:42 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:49:42.431552 | orchestrator | 2025-09-04 00:49:42 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:49:45.475496 | orchestrator | 2025-09-04 00:49:45 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:49:45.477234 | orchestrator | 2025-09-04 00:49:45 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:49:45.479748 | orchestrator | 2025-09-04 00:49:45 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:49:45.481731 | orchestrator | 2025-09-04 00:49:45 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:49:45.481939 | orchestrator | 2025-09-04 00:49:45 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:49:48.519512 | orchestrator | 2025-09-04 00:49:48 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:49:48.519877 | orchestrator | 2025-09-04 00:49:48 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:49:48.523871 | orchestrator | 2025-09-04 00:49:48 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:49:48.526175 | orchestrator | 2025-09-04 00:49:48 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:49:48.526226 | orchestrator | 2025-09-04 00:49:48 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:49:51.576237 | orchestrator | 2025-09-04 00:49:51 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:49:51.578954 | orchestrator | 2025-09-04 00:49:51 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:49:51.581833 | orchestrator | 2025-09-04 00:49:51 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:49:51.584045 | orchestrator | 2025-09-04 00:49:51 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:49:51.584442 | orchestrator | 2025-09-04 00:49:51 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:49:54.625624 | orchestrator | 2025-09-04 00:49:54 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:49:54.627664 | orchestrator | 2025-09-04 00:49:54 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:49:54.629688 | orchestrator | 2025-09-04 00:49:54 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:49:54.631900 | orchestrator | 2025-09-04 00:49:54 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:49:54.631951 | orchestrator | 2025-09-04 00:49:54 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:49:57.678008 | orchestrator | 2025-09-04 00:49:57 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:49:57.679548 | orchestrator | 2025-09-04 00:49:57 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:49:57.680833 | orchestrator | 2025-09-04 00:49:57 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:49:57.682808 | orchestrator | 2025-09-04 00:49:57 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:49:57.683239 | orchestrator | 2025-09-04 00:49:57 | INFO  | Wait 1 
second(s) until the next check 2025-09-04 00:50:00.723253 | orchestrator | 2025-09-04 00:50:00 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:50:00.724602 | orchestrator | 2025-09-04 00:50:00 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:50:00.725831 | orchestrator | 2025-09-04 00:50:00 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:50:00.729227 | orchestrator | 2025-09-04 00:50:00 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:50:00.729699 | orchestrator | 2025-09-04 00:50:00 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:50:03.767176 | orchestrator | 2025-09-04 00:50:03 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:50:03.767506 | orchestrator | 2025-09-04 00:50:03 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:50:03.768298 | orchestrator | 2025-09-04 00:50:03 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:50:03.769108 | orchestrator | 2025-09-04 00:50:03 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:50:03.769129 | orchestrator | 2025-09-04 00:50:03 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:50:06.799053 | orchestrator | 2025-09-04 00:50:06 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:50:06.800921 | orchestrator | 2025-09-04 00:50:06 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:50:06.802549 | orchestrator | 2025-09-04 00:50:06 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:50:06.803722 | orchestrator | 2025-09-04 00:50:06 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:50:06.803939 | orchestrator | 2025-09-04 00:50:06 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:50:09.841870 | orchestrator | 2025-09-04 00:50:09 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:50:09.842316 | orchestrator | 2025-09-04 00:50:09 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:50:09.843457 | orchestrator | 2025-09-04 00:50:09 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:50:09.844957 | orchestrator | 2025-09-04 00:50:09 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:50:09.844981 | orchestrator | 2025-09-04 00:50:09 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:50:12.881725 | orchestrator | 2025-09-04 00:50:12 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:50:12.881826 | orchestrator | 2025-09-04 00:50:12 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:50:12.881841 | orchestrator | 2025-09-04 00:50:12 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:50:12.881852 | orchestrator | 2025-09-04 00:50:12 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:50:12.881863 | orchestrator | 2025-09-04 00:50:12 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:50:15.933766 | orchestrator | 2025-09-04 00:50:15 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:50:15.936730 | orchestrator | 2025-09-04 00:50:15 | INFO  | Task 
6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:50:15.939809 | orchestrator | 2025-09-04 00:50:15 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:50:15.941922 | orchestrator | 2025-09-04 00:50:15 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:50:15.941974 | orchestrator | 2025-09-04 00:50:15 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:50:18.979860 | orchestrator | 2025-09-04 00:50:18 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:50:18.980543 | orchestrator | 2025-09-04 00:50:18 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:50:18.983443 | orchestrator | 2025-09-04 00:50:18 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state STARTED 2025-09-04 00:50:18.984310 | orchestrator | 2025-09-04 00:50:18 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:50:18.984334 | orchestrator | 2025-09-04 00:50:18 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:50:22.034916 | orchestrator | 2025-09-04 00:50:22 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:50:22.038235 | orchestrator | 2025-09-04 00:50:22 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:50:22.039298 | orchestrator | 2025-09-04 00:50:22 | INFO  | Task 4e6c3fa0-9af6-47b0-8466-93f481239c34 is in state SUCCESS 2025-09-04 00:50:22.043967 | orchestrator | 2025-09-04 00:50:22.043998 | orchestrator | 2025-09-04 00:50:22.044009 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-09-04 00:50:22.044021 | orchestrator | 2025-09-04 00:50:22.044031 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-04 00:50:22.044042 | orchestrator | Thursday 04 September 2025 00:48:55 +0000 (0:00:00.159) 0:00:00.159 **** 2025-09-04 00:50:22.044052 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-04 00:50:22.044062 | orchestrator | 2025-09-04 00:50:22.044073 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-04 00:50:22.044083 | orchestrator | Thursday 04 September 2025 00:48:56 +0000 (0:00:00.699) 0:00:00.859 **** 2025-09-04 00:50:22.044093 | orchestrator | changed: [testbed-manager] 2025-09-04 00:50:22.044104 | orchestrator | 2025-09-04 00:50:22.044114 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-09-04 00:50:22.044124 | orchestrator | Thursday 04 September 2025 00:48:57 +0000 (0:00:01.230) 0:00:02.089 **** 2025-09-04 00:50:22.044134 | orchestrator | changed: [testbed-manager] 2025-09-04 00:50:22.044143 | orchestrator | 2025-09-04 00:50:22.044153 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:50:22.044164 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:50:22.044175 | orchestrator | 2025-09-04 00:50:22.044185 | orchestrator | 2025-09-04 00:50:22.044195 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 00:50:22.044205 | orchestrator | Thursday 04 September 2025 00:48:58 +0000 (0:00:00.390) 0:00:02.480 **** 2025-09-04 00:50:22.044215 | orchestrator | 
=============================================================================== 2025-09-04 00:50:22.044225 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.23s 2025-09-04 00:50:22.044235 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.70s 2025-09-04 00:50:22.044244 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.39s 2025-09-04 00:50:22.044254 | orchestrator | 2025-09-04 00:50:22.044264 | orchestrator | 2025-09-04 00:50:22.044274 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-09-04 00:50:22.044283 | orchestrator | 2025-09-04 00:50:22.044293 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-09-04 00:50:22.044304 | orchestrator | Thursday 04 September 2025 00:48:55 +0000 (0:00:00.249) 0:00:00.249 **** 2025-09-04 00:50:22.044314 | orchestrator | ok: [testbed-manager] 2025-09-04 00:50:22.044325 | orchestrator | 2025-09-04 00:50:22.044359 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-09-04 00:50:22.044417 | orchestrator | Thursday 04 September 2025 00:48:56 +0000 (0:00:00.599) 0:00:00.848 **** 2025-09-04 00:50:22.044427 | orchestrator | ok: [testbed-manager] 2025-09-04 00:50:22.044436 | orchestrator | 2025-09-04 00:50:22.044446 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-04 00:50:22.044455 | orchestrator | Thursday 04 September 2025 00:48:56 +0000 (0:00:00.603) 0:00:01.451 **** 2025-09-04 00:50:22.044465 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-04 00:50:22.044474 | orchestrator | 2025-09-04 00:50:22.044498 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-04 00:50:22.044508 | orchestrator | Thursday 04 September 2025 00:48:57 +0000 (0:00:00.623) 0:00:02.075 **** 2025-09-04 00:50:22.044517 | orchestrator | changed: [testbed-manager] 2025-09-04 00:50:22.044527 | orchestrator | 2025-09-04 00:50:22.044536 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-09-04 00:50:22.044545 | orchestrator | Thursday 04 September 2025 00:48:58 +0000 (0:00:01.104) 0:00:03.179 **** 2025-09-04 00:50:22.044555 | orchestrator | changed: [testbed-manager] 2025-09-04 00:50:22.044564 | orchestrator | 2025-09-04 00:50:22.044574 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-09-04 00:50:22.044583 | orchestrator | Thursday 04 September 2025 00:48:59 +0000 (0:00:00.902) 0:00:04.081 **** 2025-09-04 00:50:22.044595 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-04 00:50:22.044607 | orchestrator | 2025-09-04 00:50:22.044618 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-09-04 00:50:22.044630 | orchestrator | Thursday 04 September 2025 00:49:00 +0000 (0:00:01.346) 0:00:05.428 **** 2025-09-04 00:50:22.044641 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-04 00:50:22.044653 | orchestrator | 2025-09-04 00:50:22.044664 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-09-04 00:50:22.044676 | orchestrator | Thursday 04 September 2025 00:49:01 +0000 (0:00:00.660) 0:00:06.088 **** 2025-09-04 00:50:22.044687 | orchestrator | ok: 
[testbed-manager] 2025-09-04 00:50:22.044698 | orchestrator | 2025-09-04 00:50:22.044709 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-09-04 00:50:22.044721 | orchestrator | Thursday 04 September 2025 00:49:01 +0000 (0:00:00.369) 0:00:06.457 **** 2025-09-04 00:50:22.044732 | orchestrator | ok: [testbed-manager] 2025-09-04 00:50:22.044743 | orchestrator | 2025-09-04 00:50:22.044755 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:50:22.044766 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:50:22.044777 | orchestrator | 2025-09-04 00:50:22.044788 | orchestrator | 2025-09-04 00:50:22.044799 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 00:50:22.044810 | orchestrator | Thursday 04 September 2025 00:49:02 +0000 (0:00:00.306) 0:00:06.764 **** 2025-09-04 00:50:22.044822 | orchestrator | =============================================================================== 2025-09-04 00:50:22.044834 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.35s 2025-09-04 00:50:22.044845 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.10s 2025-09-04 00:50:22.044857 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.90s 2025-09-04 00:50:22.044879 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.66s 2025-09-04 00:50:22.044891 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.62s 2025-09-04 00:50:22.044903 | orchestrator | Create .kube directory -------------------------------------------------- 0.60s 2025-09-04 00:50:22.044914 | orchestrator | Get home directory of operator user ------------------------------------- 0.60s 2025-09-04 00:50:22.044925 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.37s 2025-09-04 00:50:22.044947 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.31s 2025-09-04 00:50:22.044958 | orchestrator | 2025-09-04 00:50:22.044967 | orchestrator | 2025-09-04 00:50:22.044977 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-09-04 00:50:22.044987 | orchestrator | 2025-09-04 00:50:22.044996 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-09-04 00:50:22.045006 | orchestrator | Thursday 04 September 2025 00:47:54 +0000 (0:00:00.329) 0:00:00.329 **** 2025-09-04 00:50:22.045015 | orchestrator | ok: [localhost] => { 2025-09-04 00:50:22.045025 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-09-04 00:50:22.045035 | orchestrator | } 2025-09-04 00:50:22.045045 | orchestrator | 2025-09-04 00:50:22.045054 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-09-04 00:50:22.045064 | orchestrator | Thursday 04 September 2025 00:47:54 +0000 (0:00:00.074) 0:00:00.404 **** 2025-09-04 00:50:22.045074 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-09-04 00:50:22.045085 | orchestrator | ...ignoring 2025-09-04 00:50:22.045095 | orchestrator | 2025-09-04 00:50:22.045104 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-09-04 00:50:22.045114 | orchestrator | Thursday 04 September 2025 00:47:57 +0000 (0:00:03.226) 0:00:03.631 **** 2025-09-04 00:50:22.045123 | orchestrator | skipping: [localhost] 2025-09-04 00:50:22.045132 | orchestrator | 2025-09-04 00:50:22.045142 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-09-04 00:50:22.045151 | orchestrator | Thursday 04 September 2025 00:47:57 +0000 (0:00:00.144) 0:00:03.775 **** 2025-09-04 00:50:22.045161 | orchestrator | ok: [localhost] 2025-09-04 00:50:22.045170 | orchestrator | 2025-09-04 00:50:22.045179 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-04 00:50:22.045189 | orchestrator | 2025-09-04 00:50:22.045198 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-04 00:50:22.045208 | orchestrator | Thursday 04 September 2025 00:47:57 +0000 (0:00:00.284) 0:00:04.060 **** 2025-09-04 00:50:22.045217 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:50:22.045227 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:50:22.045236 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:50:22.045246 | orchestrator | 2025-09-04 00:50:22.045255 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-04 00:50:22.045264 | orchestrator | Thursday 04 September 2025 00:47:58 +0000 (0:00:00.703) 0:00:04.763 **** 2025-09-04 00:50:22.045274 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-09-04 00:50:22.045298 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-09-04 00:50:22.045317 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-09-04 00:50:22.045327 | orchestrator | 2025-09-04 00:50:22.045337 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-09-04 00:50:22.045347 | orchestrator | 2025-09-04 00:50:22.045356 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-04 00:50:22.045392 | orchestrator | Thursday 04 September 2025 00:48:00 +0000 (0:00:01.712) 0:00:06.476 **** 2025-09-04 00:50:22.045403 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:50:22.045412 | orchestrator | 2025-09-04 00:50:22.045422 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-09-04 00:50:22.045431 | orchestrator | Thursday 04 September 2025 00:48:01 +0000 (0:00:01.298) 0:00:07.775 **** 2025-09-04 00:50:22.045441 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:50:22.045451 | orchestrator | 2025-09-04 00:50:22.045460 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-09-04 00:50:22.045470 | orchestrator | Thursday 04 September 2025 00:48:02 +0000 (0:00:01.055) 0:00:08.830 **** 2025-09-04 00:50:22.045486 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:50:22.045496 | orchestrator | 2025-09-04 00:50:22.045506 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] 
************************************* 2025-09-04 00:50:22.045515 | orchestrator | Thursday 04 September 2025 00:48:03 +0000 (0:00:00.435) 0:00:09.265 **** 2025-09-04 00:50:22.045525 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:50:22.045534 | orchestrator | 2025-09-04 00:50:22.045544 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-09-04 00:50:22.045553 | orchestrator | Thursday 04 September 2025 00:48:03 +0000 (0:00:00.428) 0:00:09.693 **** 2025-09-04 00:50:22.045563 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:50:22.045572 | orchestrator | 2025-09-04 00:50:22.045582 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-09-04 00:50:22.045591 | orchestrator | Thursday 04 September 2025 00:48:03 +0000 (0:00:00.360) 0:00:10.053 **** 2025-09-04 00:50:22.045601 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:50:22.045610 | orchestrator | 2025-09-04 00:50:22.045620 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-04 00:50:22.045629 | orchestrator | Thursday 04 September 2025 00:48:04 +0000 (0:00:00.398) 0:00:10.452 **** 2025-09-04 00:50:22.045639 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:50:22.045649 | orchestrator | 2025-09-04 00:50:22.045658 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-09-04 00:50:22.045673 | orchestrator | Thursday 04 September 2025 00:48:05 +0000 (0:00:01.213) 0:00:11.666 **** 2025-09-04 00:50:22.045683 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:50:22.045692 | orchestrator | 2025-09-04 00:50:22.045702 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-09-04 00:50:22.045711 | orchestrator | Thursday 04 September 2025 00:48:06 +0000 (0:00:00.922) 0:00:12.589 **** 2025-09-04 00:50:22.045721 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:50:22.045730 | orchestrator | 2025-09-04 00:50:22.045740 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-09-04 00:50:22.045750 | orchestrator | Thursday 04 September 2025 00:48:06 +0000 (0:00:00.447) 0:00:13.037 **** 2025-09-04 00:50:22.045759 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:50:22.045769 | orchestrator | 2025-09-04 00:50:22.045778 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-09-04 00:50:22.045788 | orchestrator | Thursday 04 September 2025 00:48:07 +0000 (0:00:00.494) 0:00:13.531 **** 2025-09-04 00:50:22.045802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-04 00:50:22.045888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-04 00:50:22.045917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-04 00:50:22.045928 | orchestrator | 2025-09-04 00:50:22.045938 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-09-04 00:50:22.045948 | orchestrator | Thursday 04 September 2025 00:48:09 +0000 (0:00:01.758) 0:00:15.290 **** 2025-09-04 00:50:22.045968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-04 00:50:22.045979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-04 00:50:22.046000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-04 00:50:22.046011 | orchestrator | 2025-09-04 00:50:22.046082 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-09-04 00:50:22.046093 | orchestrator | Thursday 04 September 2025 00:48:12 +0000 (0:00:02.856) 0:00:18.146 **** 2025-09-04 00:50:22.046102 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-04 00:50:22.046113 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-04 00:50:22.046123 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-04 00:50:22.046132 | orchestrator | 2025-09-04 00:50:22.046141 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-09-04 00:50:22.046151 | orchestrator | Thursday 04 September 2025 00:48:15 +0000 (0:00:03.267) 0:00:21.414 **** 2025-09-04 00:50:22.046161 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 
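The "Check RabbitMQ service" probe earlier in this run (the ignored failure against 192.168.16.9:15672) and the later "Waiting for rabbitmq to start" step both reduce to polling the RabbitMQ management endpoint until it serves the string "RabbitMQ Management". A minimal sketch of that probe with ansible.builtin.wait_for follows; the host, port, search string and task names are taken from the log output above, while the exact task layout, the 2-second timeout and the when: condition are illustrative assumptions rather than the verbatim osism/kolla-ansible implementation.

    # Hedged sketch: poll the RabbitMQ management endpoint the way the
    # "Check RabbitMQ service" task above does. Values mirror the log;
    # the structure, timeout and condition are assumptions.
    - name: Check RabbitMQ service
      ansible.builtin.wait_for:
        host: 192.168.16.9            # internal VIP named in the timeout message
        port: 15672                   # management port fronted by haproxy
        search_regex: "RabbitMQ Management"
        timeout: 2                    # short probe; the log reports elapsed=2
      register: rabbitmq_check
      ignore_errors: true             # "...ignoring" - failing here is expected on a first deploy

    - name: Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running
      ansible.builtin.set_fact:
        kolla_action_rabbitmq: upgrade
      when: rabbitmq_check is succeeded   # assumed condition; skipped in this run because the probe failed

Because the probe failed and was ignored, kolla_action_rabbitmq falls back to kolla_action_ng and the role proceeds with a fresh deploy, which is why the bootstrap container, the restart handlers and the "Waiting for rabbitmq to start" steps (85.47s in total per the recap further down) all appear in this run.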
2025-09-04 00:50:22.046170 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-04 00:50:22.046179 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-04 00:50:22.046189 | orchestrator | 2025-09-04 00:50:22.046199 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-09-04 00:50:22.046214 | orchestrator | Thursday 04 September 2025 00:48:17 +0000 (0:00:02.035) 0:00:23.450 **** 2025-09-04 00:50:22.046224 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-04 00:50:22.046233 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-04 00:50:22.046243 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-04 00:50:22.046252 | orchestrator | 2025-09-04 00:50:22.046262 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-09-04 00:50:22.046271 | orchestrator | Thursday 04 September 2025 00:48:18 +0000 (0:00:01.654) 0:00:25.104 **** 2025-09-04 00:50:22.046280 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-04 00:50:22.046290 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-04 00:50:22.046299 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-04 00:50:22.046309 | orchestrator | 2025-09-04 00:50:22.046318 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-09-04 00:50:22.046328 | orchestrator | Thursday 04 September 2025 00:48:21 +0000 (0:00:02.235) 0:00:27.340 **** 2025-09-04 00:50:22.046337 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-04 00:50:22.046354 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-04 00:50:22.046381 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-04 00:50:22.046391 | orchestrator | 2025-09-04 00:50:22.046400 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-09-04 00:50:22.046410 | orchestrator | Thursday 04 September 2025 00:48:23 +0000 (0:00:02.372) 0:00:29.712 **** 2025-09-04 00:50:22.046419 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-04 00:50:22.046429 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-04 00:50:22.046438 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-04 00:50:22.046448 | orchestrator | 2025-09-04 00:50:22.046457 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-04 00:50:22.046467 | orchestrator | Thursday 04 September 2025 00:48:25 +0000 (0:00:01.652) 0:00:31.364 **** 2025-09-04 00:50:22.046476 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:50:22.046486 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:50:22.046496 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:50:22.046505 | orchestrator | 2025-09-04 
00:50:22.046515 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-09-04 00:50:22.046524 | orchestrator | Thursday 04 September 2025 00:48:25 +0000 (0:00:00.727) 0:00:32.092 **** 2025-09-04 00:50:22.046540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-04 00:50:22.046557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-04 00:50:22.046569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-04 00:50:22.046586 | orchestrator | 2025-09-04 
00:50:22.046596 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-09-04 00:50:22.046605 | orchestrator | Thursday 04 September 2025 00:48:28 +0000 (0:00:02.137) 0:00:34.229 **** 2025-09-04 00:50:22.046615 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:50:22.046624 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:50:22.046634 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:50:22.046644 | orchestrator | 2025-09-04 00:50:22.046653 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-09-04 00:50:22.046662 | orchestrator | Thursday 04 September 2025 00:48:28 +0000 (0:00:00.848) 0:00:35.078 **** 2025-09-04 00:50:22.046672 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:50:22.046681 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:50:22.046691 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:50:22.046700 | orchestrator | 2025-09-04 00:50:22.046710 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-09-04 00:50:22.046719 | orchestrator | Thursday 04 September 2025 00:48:37 +0000 (0:00:08.478) 0:00:43.557 **** 2025-09-04 00:50:22.046729 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:50:22.046738 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:50:22.046748 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:50:22.046757 | orchestrator | 2025-09-04 00:50:22.046767 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-04 00:50:22.046776 | orchestrator | 2025-09-04 00:50:22.046785 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-04 00:50:22.046799 | orchestrator | Thursday 04 September 2025 00:48:39 +0000 (0:00:01.575) 0:00:45.132 **** 2025-09-04 00:50:22.046809 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:50:22.046818 | orchestrator | 2025-09-04 00:50:22.046828 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-04 00:50:22.046837 | orchestrator | Thursday 04 September 2025 00:48:39 +0000 (0:00:00.596) 0:00:45.728 **** 2025-09-04 00:50:22.046846 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:50:22.046856 | orchestrator | 2025-09-04 00:50:22.046865 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-04 00:50:22.046875 | orchestrator | Thursday 04 September 2025 00:48:40 +0000 (0:00:00.783) 0:00:46.512 **** 2025-09-04 00:50:22.046884 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:50:22.046893 | orchestrator | 2025-09-04 00:50:22.046903 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-04 00:50:22.046913 | orchestrator | Thursday 04 September 2025 00:48:42 +0000 (0:00:01.996) 0:00:48.509 **** 2025-09-04 00:50:22.046922 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:50:22.046931 | orchestrator | 2025-09-04 00:50:22.046941 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-04 00:50:22.046950 | orchestrator | 2025-09-04 00:50:22.046959 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-04 00:50:22.046969 | orchestrator | Thursday 04 September 2025 00:49:38 +0000 (0:00:56.148) 0:01:44.658 **** 2025-09-04 00:50:22.046978 | orchestrator | ok: [testbed-node-2] 2025-09-04 
00:50:22.046987 | orchestrator | 2025-09-04 00:50:22.046997 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-04 00:50:22.047012 | orchestrator | Thursday 04 September 2025 00:49:39 +0000 (0:00:00.589) 0:01:45.248 **** 2025-09-04 00:50:22.047022 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:50:22.047031 | orchestrator | 2025-09-04 00:50:22.047040 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-04 00:50:22.047050 | orchestrator | Thursday 04 September 2025 00:49:39 +0000 (0:00:00.239) 0:01:45.487 **** 2025-09-04 00:50:22.047059 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:50:22.047068 | orchestrator | 2025-09-04 00:50:22.047078 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-04 00:50:22.047087 | orchestrator | Thursday 04 September 2025 00:49:41 +0000 (0:00:01.920) 0:01:47.408 **** 2025-09-04 00:50:22.047097 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:50:22.047106 | orchestrator | 2025-09-04 00:50:22.047115 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-04 00:50:22.047125 | orchestrator | 2025-09-04 00:50:22.047134 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-04 00:50:22.047143 | orchestrator | Thursday 04 September 2025 00:49:59 +0000 (0:00:18.284) 0:02:05.692 **** 2025-09-04 00:50:22.047152 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:50:22.047162 | orchestrator | 2025-09-04 00:50:22.047176 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-04 00:50:22.047185 | orchestrator | Thursday 04 September 2025 00:50:00 +0000 (0:00:00.588) 0:02:06.281 **** 2025-09-04 00:50:22.047195 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:50:22.047204 | orchestrator | 2025-09-04 00:50:22.047214 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-04 00:50:22.047223 | orchestrator | Thursday 04 September 2025 00:50:00 +0000 (0:00:00.251) 0:02:06.533 **** 2025-09-04 00:50:22.047233 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:50:22.047242 | orchestrator | 2025-09-04 00:50:22.047252 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-04 00:50:22.047261 | orchestrator | Thursday 04 September 2025 00:50:07 +0000 (0:00:06.673) 0:02:13.206 **** 2025-09-04 00:50:22.047270 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:50:22.047280 | orchestrator | 2025-09-04 00:50:22.047290 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-09-04 00:50:22.047299 | orchestrator | 2025-09-04 00:50:22.047308 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-09-04 00:50:22.047318 | orchestrator | Thursday 04 September 2025 00:50:18 +0000 (0:00:11.032) 0:02:24.239 **** 2025-09-04 00:50:22.047327 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:50:22.047337 | orchestrator | 2025-09-04 00:50:22.047346 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-09-04 00:50:22.047356 | orchestrator | Thursday 04 September 2025 00:50:18 +0000 (0:00:00.619) 0:02:24.858 **** 2025-09-04 00:50:22.047408 | orchestrator | 
[WARNING]: Could not match supplied host pattern, ignoring: 2025-09-04 00:50:22.047419 | orchestrator | enable_outward_rabbitmq_True 2025-09-04 00:50:22.047429 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-04 00:50:22.047438 | orchestrator | outward_rabbitmq_restart 2025-09-04 00:50:22.047448 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:50:22.047457 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:50:22.047467 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:50:22.047476 | orchestrator | 2025-09-04 00:50:22.047486 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-09-04 00:50:22.047495 | orchestrator | skipping: no hosts matched 2025-09-04 00:50:22.047504 | orchestrator | 2025-09-04 00:50:22.047514 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-09-04 00:50:22.047523 | orchestrator | skipping: no hosts matched 2025-09-04 00:50:22.047533 | orchestrator | 2025-09-04 00:50:22.047542 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-09-04 00:50:22.047551 | orchestrator | skipping: no hosts matched 2025-09-04 00:50:22.047568 | orchestrator | 2025-09-04 00:50:22.047577 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:50:22.047587 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-09-04 00:50:22.047597 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-04 00:50:22.047611 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-04 00:50:22.047622 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-04 00:50:22.047631 | orchestrator | 2025-09-04 00:50:22.047641 | orchestrator | 2025-09-04 00:50:22.047650 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 00:50:22.047660 | orchestrator | Thursday 04 September 2025 00:50:21 +0000 (0:00:02.388) 0:02:27.247 **** 2025-09-04 00:50:22.047669 | orchestrator | =============================================================================== 2025-09-04 00:50:22.047679 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 85.47s 2025-09-04 00:50:22.047688 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.59s 2025-09-04 00:50:22.047697 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.48s 2025-09-04 00:50:22.047707 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 3.27s 2025-09-04 00:50:22.047717 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.23s 2025-09-04 00:50:22.047726 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.86s 2025-09-04 00:50:22.047735 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.39s 2025-09-04 00:50:22.047745 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.37s 2025-09-04 00:50:22.047754 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.24s 2025-09-04 00:50:22.047764 | orchestrator | rabbitmq : Check 
rabbitmq containers ------------------------------------ 2.14s 2025-09-04 00:50:22.047774 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.04s 2025-09-04 00:50:22.047783 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.78s 2025-09-04 00:50:22.047793 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.76s 2025-09-04 00:50:22.047802 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.71s 2025-09-04 00:50:22.047812 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.65s 2025-09-04 00:50:22.047821 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.65s 2025-09-04 00:50:22.047830 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 1.58s 2025-09-04 00:50:22.047845 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.30s 2025-09-04 00:50:22.047855 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.28s 2025-09-04 00:50:22.047864 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.21s 2025-09-04 00:50:22.047874 | orchestrator | 2025-09-04 00:50:22 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:50:22.047883 | orchestrator | 2025-09-04 00:50:22 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:50:25.091989 | orchestrator | 2025-09-04 00:50:25 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:50:25.092610 | orchestrator | 2025-09-04 00:50:25 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:50:25.093997 | orchestrator | 2025-09-04 00:50:25 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:50:25.095862 | orchestrator | 2025-09-04 00:50:25 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:50:28.130248 | orchestrator | 2025-09-04 00:50:28 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:50:28.131502 | orchestrator | 2025-09-04 00:50:28 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:50:28.135412 | orchestrator | 2025-09-04 00:50:28 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:50:28.135441 | orchestrator | 2025-09-04 00:50:28 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:50:31.180792 | orchestrator | 2025-09-04 00:50:31 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:50:31.181811 | orchestrator | 2025-09-04 00:50:31 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:50:31.183992 | orchestrator | 2025-09-04 00:50:31 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:50:31.184101 | orchestrator | 2025-09-04 00:50:31 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:50:34.231715 | orchestrator | 2025-09-04 00:50:34 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:50:34.233851 | orchestrator | 2025-09-04 00:50:34 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:50:34.236411 | orchestrator | 2025-09-04 00:50:34 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:50:34.236792 | 
orchestrator | 2025-09-04 00:50:34 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:50:37.286859 | orchestrator | 2025-09-04 00:50:37 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:50:37.288034 | orchestrator | 2025-09-04 00:50:37 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:50:37.288962 | orchestrator | 2025-09-04 00:50:37 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:50:37.288988 | orchestrator | 2025-09-04 00:50:37 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:50:40.330512 | orchestrator | 2025-09-04 00:50:40 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:50:40.332549 | orchestrator | 2025-09-04 00:50:40 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:50:40.334356 | orchestrator | 2025-09-04 00:50:40 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:50:40.334396 | orchestrator | 2025-09-04 00:50:40 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:50:43.379192 | orchestrator | 2025-09-04 00:50:43 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:50:43.380326 | orchestrator | 2025-09-04 00:50:43 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:50:43.381793 | orchestrator | 2025-09-04 00:50:43 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:50:43.381826 | orchestrator | 2025-09-04 00:50:43 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:50:46.421802 | orchestrator | 2025-09-04 00:50:46 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:50:46.422547 | orchestrator | 2025-09-04 00:50:46 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:50:46.423326 | orchestrator | 2025-09-04 00:50:46 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:50:46.423416 | orchestrator | 2025-09-04 00:50:46 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:50:49.460929 | orchestrator | 2025-09-04 00:50:49 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:50:49.465099 | orchestrator | 2025-09-04 00:50:49 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:50:49.465889 | orchestrator | 2025-09-04 00:50:49 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:50:49.465917 | orchestrator | 2025-09-04 00:50:49 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:50:52.505476 | orchestrator | 2025-09-04 00:50:52 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:50:52.505566 | orchestrator | 2025-09-04 00:50:52 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:50:52.506466 | orchestrator | 2025-09-04 00:50:52 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:50:52.506489 | orchestrator | 2025-09-04 00:50:52 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:50:55.548238 | orchestrator | 2025-09-04 00:50:55 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:50:55.549217 | orchestrator | 2025-09-04 00:50:55 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:50:55.551043 | orchestrator | 2025-09-04 00:50:55 | INFO  | Task 
2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:50:55.551077 | orchestrator | 2025-09-04 00:50:55 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:50:58.590404 | orchestrator | 2025-09-04 00:50:58 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:50:58.594474 | orchestrator | 2025-09-04 00:50:58 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:50:58.595193 | orchestrator | 2025-09-04 00:50:58 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:50:58.595219 | orchestrator | 2025-09-04 00:50:58 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:51:01.626839 | orchestrator | 2025-09-04 00:51:01 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:51:01.627538 | orchestrator | 2025-09-04 00:51:01 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:51:01.628537 | orchestrator | 2025-09-04 00:51:01 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:51:01.628579 | orchestrator | 2025-09-04 00:51:01 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:51:04.663154 | orchestrator | 2025-09-04 00:51:04 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:51:04.665085 | orchestrator | 2025-09-04 00:51:04 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:51:04.667460 | orchestrator | 2025-09-04 00:51:04 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:51:04.667589 | orchestrator | 2025-09-04 00:51:04 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:51:07.720064 | orchestrator | 2025-09-04 00:51:07 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:51:07.722148 | orchestrator | 2025-09-04 00:51:07 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:51:07.723812 | orchestrator | 2025-09-04 00:51:07 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:51:07.723996 | orchestrator | 2025-09-04 00:51:07 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:51:10.772048 | orchestrator | 2025-09-04 00:51:10 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:51:10.773525 | orchestrator | 2025-09-04 00:51:10 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:51:10.775362 | orchestrator | 2025-09-04 00:51:10 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:51:10.775393 | orchestrator | 2025-09-04 00:51:10 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:51:13.817830 | orchestrator | 2025-09-04 00:51:13 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:51:13.818725 | orchestrator | 2025-09-04 00:51:13 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:51:13.821414 | orchestrator | 2025-09-04 00:51:13 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:51:13.822089 | orchestrator | 2025-09-04 00:51:13 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:51:16.861112 | orchestrator | 2025-09-04 00:51:16 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:51:16.861615 | orchestrator | 2025-09-04 00:51:16 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state 
STARTED 2025-09-04 00:51:16.862590 | orchestrator | 2025-09-04 00:51:16 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:51:16.862631 | orchestrator | 2025-09-04 00:51:16 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:51:19.898180 | orchestrator | 2025-09-04 00:51:19 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:51:19.900042 | orchestrator | 2025-09-04 00:51:19 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state STARTED 2025-09-04 00:51:19.902880 | orchestrator | 2025-09-04 00:51:19 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:51:19.903150 | orchestrator | 2025-09-04 00:51:19 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:51:22.949580 | orchestrator | 2025-09-04 00:51:22 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:51:22.954399 | orchestrator | 2025-09-04 00:51:22 | INFO  | Task 6cae4469-5be9-49ac-8e6f-bad2b97552f9 is in state SUCCESS 2025-09-04 00:51:22.956046 | orchestrator | 2025-09-04 00:51:22.956337 | orchestrator | 2025-09-04 00:51:22.956376 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-04 00:51:22.956387 | orchestrator | 2025-09-04 00:51:22.956396 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-04 00:51:22.956405 | orchestrator | Thursday 04 September 2025 00:48:51 +0000 (0:00:00.250) 0:00:00.250 **** 2025-09-04 00:51:22.956414 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:51:22.956424 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:51:22.956432 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:51:22.956441 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:51:22.956450 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:51:22.956458 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:51:22.956467 | orchestrator | 2025-09-04 00:51:22.956475 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-04 00:51:22.956484 | orchestrator | Thursday 04 September 2025 00:48:53 +0000 (0:00:01.026) 0:00:01.276 **** 2025-09-04 00:51:22.956492 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-09-04 00:51:22.956502 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-09-04 00:51:22.956511 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-09-04 00:51:22.956519 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-09-04 00:51:22.956528 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-09-04 00:51:22.956561 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-09-04 00:51:22.956570 | orchestrator | 2025-09-04 00:51:22.956578 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-09-04 00:51:22.956588 | orchestrator | 2025-09-04 00:51:22.956610 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-09-04 00:51:22.956619 | orchestrator | Thursday 04 September 2025 00:48:55 +0000 (0:00:02.391) 0:00:03.668 **** 2025-09-04 00:51:22.956629 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:51:22.956639 | orchestrator | 2025-09-04 00:51:22.956648 | orchestrator | TASK [ovn-controller : Ensuring config 
directories exist] ********************** 2025-09-04 00:51:22.956657 | orchestrator | Thursday 04 September 2025 00:48:57 +0000 (0:00:01.721) 0:00:05.389 **** 2025-09-04 00:51:22.956668 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.956679 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.956689 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.956698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.956707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.956716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.956725 | orchestrator | 2025-09-04 00:51:22.956745 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-09-04 00:51:22.956754 | orchestrator | Thursday 04 September 2025 00:48:58 +0000 (0:00:01.650) 0:00:07.039 **** 2025-09-04 00:51:22.956763 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.956781 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.956793 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.956802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.956812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.956821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.956829 | orchestrator | 2025-09-04 00:51:22.956837 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-09-04 00:51:22.956846 | orchestrator | Thursday 04 September 2025 00:49:00 +0000 (0:00:02.108) 0:00:09.148 **** 2025-09-04 00:51:22.956855 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.956863 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.956878 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.956894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.956904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.956918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.956928 | orchestrator | 2025-09-04 00:51:22.956937 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-09-04 00:51:22.956947 | orchestrator | Thursday 04 September 2025 00:49:02 +0000 (0:00:01.233) 0:00:10.382 **** 2025-09-04 00:51:22.956956 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.957051 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.957061 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.957071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.957081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.957091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.957107 | orchestrator | 2025-09-04 00:51:22.957122 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-09-04 00:51:22.957132 | orchestrator | Thursday 04 September 2025 00:49:03 +0000 (0:00:01.566) 0:00:11.948 **** 2025-09-04 00:51:22.957142 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.957152 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.957166 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.957177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-09-04 00:51:22.957186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.957196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.957206 | orchestrator | 2025-09-04 00:51:22.957215 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-09-04 00:51:22.957225 | orchestrator | Thursday 04 September 2025 00:49:04 +0000 (0:00:01.126) 0:00:13.075 **** 2025-09-04 00:51:22.957235 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:51:22.957245 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:51:22.957254 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:51:22.957263 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:51:22.957271 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:51:22.957279 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:51:22.957287 | orchestrator | 2025-09-04 00:51:22.957296 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-09-04 00:51:22.957355 | orchestrator | Thursday 04 September 2025 00:49:07 +0000 (0:00:02.730) 0:00:15.805 **** 2025-09-04 00:51:22.957365 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-09-04 00:51:22.957375 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-09-04 00:51:22.957383 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-09-04 00:51:22.957393 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-09-04 00:51:22.957402 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-09-04 00:51:22.957411 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-09-04 00:51:22.957420 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-04 00:51:22.957429 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-04 00:51:22.957442 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-04 00:51:22.957451 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-04 00:51:22.957459 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-04 00:51:22.957468 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-04 00:51:22.957476 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-04 00:51:22.957486 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-04 00:51:22.957554 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-04 00:51:22.957564 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-04 00:51:22.957572 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-04 00:51:22.957585 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-04 00:51:22.957593 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-04 00:51:22.957602 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-04 00:51:22.957610 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-04 00:51:22.957618 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-04 00:51:22.957626 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-04 00:51:22.957635 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-04 00:51:22.957643 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-04 00:51:22.957651 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-04 00:51:22.957658 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-04 00:51:22.957666 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-04 00:51:22.957675 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-04 00:51:22.957688 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-04 00:51:22.957697 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-04 00:51:22.957705 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-04 00:51:22.957713 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-04 00:51:22.957720 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-04 00:51:22.957729 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-04 00:51:22.957738 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-04 00:51:22.957746 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-04 
00:51:22.957755 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-04 00:51:22.957763 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-04 00:51:22.957771 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-04 00:51:22.957779 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-04 00:51:22.957788 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-04 00:51:22.957795 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-09-04 00:51:22.957804 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-09-04 00:51:22.957818 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-09-04 00:51:22.957826 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-09-04 00:51:22.957834 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-09-04 00:51:22.957842 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-09-04 00:51:22.957851 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-04 00:51:22.957859 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-04 00:51:22.957867 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-04 00:51:22.957875 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-04 00:51:22.957884 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-04 00:51:22.957895 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-04 00:51:22.957904 | orchestrator | 2025-09-04 00:51:22.957912 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-04 00:51:22.957921 | orchestrator | Thursday 04 September 2025 00:49:26 +0000 (0:00:18.616) 0:00:34.422 **** 2025-09-04 00:51:22.957929 | orchestrator | 2025-09-04 00:51:22.957937 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-04 00:51:22.957951 | orchestrator | Thursday 04 September 2025 00:49:26 +0000 (0:00:00.249) 0:00:34.671 **** 2025-09-04 00:51:22.957959 | orchestrator | 2025-09-04 00:51:22.957967 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-04 00:51:22.957975 | orchestrator | 
Thursday 04 September 2025 00:49:26 +0000 (0:00:00.065) 0:00:34.737 **** 2025-09-04 00:51:22.957984 | orchestrator | 2025-09-04 00:51:22.957992 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-04 00:51:22.958000 | orchestrator | Thursday 04 September 2025 00:49:26 +0000 (0:00:00.068) 0:00:34.806 **** 2025-09-04 00:51:22.958008 | orchestrator | 2025-09-04 00:51:22.958055 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-04 00:51:22.958065 | orchestrator | Thursday 04 September 2025 00:49:26 +0000 (0:00:00.063) 0:00:34.870 **** 2025-09-04 00:51:22.958073 | orchestrator | 2025-09-04 00:51:22.958082 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-04 00:51:22.958090 | orchestrator | Thursday 04 September 2025 00:49:26 +0000 (0:00:00.067) 0:00:34.938 **** 2025-09-04 00:51:22.958098 | orchestrator | 2025-09-04 00:51:22.958106 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-09-04 00:51:22.958114 | orchestrator | Thursday 04 September 2025 00:49:26 +0000 (0:00:00.061) 0:00:34.999 **** 2025-09-04 00:51:22.958122 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:51:22.958131 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:51:22.958139 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:51:22.958147 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:51:22.958155 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:51:22.958163 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:51:22.958172 | orchestrator | 2025-09-04 00:51:22.958181 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-09-04 00:51:22.958190 | orchestrator | Thursday 04 September 2025 00:49:28 +0000 (0:00:01.578) 0:00:36.578 **** 2025-09-04 00:51:22.958199 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:51:22.958209 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:51:22.958218 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:51:22.958227 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:51:22.958237 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:51:22.958246 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:51:22.958255 | orchestrator | 2025-09-04 00:51:22.958265 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-09-04 00:51:22.958275 | orchestrator | 2025-09-04 00:51:22.958284 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-04 00:51:22.958293 | orchestrator | Thursday 04 September 2025 00:50:03 +0000 (0:00:34.859) 0:01:11.437 **** 2025-09-04 00:51:22.958316 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:51:22.958326 | orchestrator | 2025-09-04 00:51:22.958335 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-04 00:51:22.958345 | orchestrator | Thursday 04 September 2025 00:50:04 +0000 (0:00:00.922) 0:01:12.360 **** 2025-09-04 00:51:22.958354 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:51:22.958364 | orchestrator | 2025-09-04 00:51:22.958373 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-09-04 00:51:22.958382 | 
orchestrator | Thursday 04 September 2025 00:50:04 +0000 (0:00:00.529) 0:01:12.890 **** 2025-09-04 00:51:22.958392 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:51:22.958401 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:51:22.958411 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:51:22.958421 | orchestrator | 2025-09-04 00:51:22.958430 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-09-04 00:51:22.958440 | orchestrator | Thursday 04 September 2025 00:50:05 +0000 (0:00:00.975) 0:01:13.866 **** 2025-09-04 00:51:22.958449 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:51:22.958463 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:51:22.958474 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:51:22.958487 | orchestrator | 2025-09-04 00:51:22.958497 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-09-04 00:51:22.958506 | orchestrator | Thursday 04 September 2025 00:50:05 +0000 (0:00:00.312) 0:01:14.178 **** 2025-09-04 00:51:22.958515 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:51:22.958524 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:51:22.958533 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:51:22.958542 | orchestrator | 2025-09-04 00:51:22.958551 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-09-04 00:51:22.958559 | orchestrator | Thursday 04 September 2025 00:50:06 +0000 (0:00:00.344) 0:01:14.522 **** 2025-09-04 00:51:22.958566 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:51:22.958573 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:51:22.958581 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:51:22.958588 | orchestrator | 2025-09-04 00:51:22.958596 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-09-04 00:51:22.958603 | orchestrator | Thursday 04 September 2025 00:50:06 +0000 (0:00:00.343) 0:01:14.866 **** 2025-09-04 00:51:22.958611 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:51:22.958619 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:51:22.958627 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:51:22.958634 | orchestrator | 2025-09-04 00:51:22.958642 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-09-04 00:51:22.958649 | orchestrator | Thursday 04 September 2025 00:50:07 +0000 (0:00:00.468) 0:01:15.334 **** 2025-09-04 00:51:22.958657 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:51:22.958665 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:51:22.958674 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:51:22.958683 | orchestrator | 2025-09-04 00:51:22.958699 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-09-04 00:51:22.958708 | orchestrator | Thursday 04 September 2025 00:50:07 +0000 (0:00:00.305) 0:01:15.639 **** 2025-09-04 00:51:22.958716 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:51:22.958724 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:51:22.958732 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:51:22.958740 | orchestrator | 2025-09-04 00:51:22.958749 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-09-04 00:51:22.958757 | orchestrator | Thursday 04 September 2025 00:50:07 +0000 (0:00:00.320) 0:01:15.960 **** 2025-09-04 00:51:22.958766 | orchestrator | skipping: 
[testbed-node-0] 2025-09-04 00:51:22.958775 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:51:22.958783 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:51:22.958791 | orchestrator | 2025-09-04 00:51:22.958799 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-09-04 00:51:22.958807 | orchestrator | Thursday 04 September 2025 00:50:07 +0000 (0:00:00.271) 0:01:16.231 **** 2025-09-04 00:51:22.958815 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:51:22.958823 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:51:22.958832 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:51:22.958840 | orchestrator | 2025-09-04 00:51:22.958849 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-09-04 00:51:22.958857 | orchestrator | Thursday 04 September 2025 00:50:08 +0000 (0:00:00.474) 0:01:16.705 **** 2025-09-04 00:51:22.958866 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:51:22.958874 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:51:22.958882 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:51:22.958890 | orchestrator | 2025-09-04 00:51:22.958899 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-09-04 00:51:22.958907 | orchestrator | Thursday 04 September 2025 00:50:08 +0000 (0:00:00.288) 0:01:16.994 **** 2025-09-04 00:51:22.958915 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:51:22.958924 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:51:22.958932 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:51:22.958946 | orchestrator | 2025-09-04 00:51:22.958954 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-09-04 00:51:22.958962 | orchestrator | Thursday 04 September 2025 00:50:09 +0000 (0:00:00.302) 0:01:17.296 **** 2025-09-04 00:51:22.958970 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:51:22.958979 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:51:22.958987 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:51:22.958996 | orchestrator | 2025-09-04 00:51:22.959004 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-09-04 00:51:22.959013 | orchestrator | Thursday 04 September 2025 00:50:09 +0000 (0:00:00.285) 0:01:17.581 **** 2025-09-04 00:51:22.959021 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:51:22.959029 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:51:22.959037 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:51:22.959046 | orchestrator | 2025-09-04 00:51:22.959054 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-09-04 00:51:22.959061 | orchestrator | Thursday 04 September 2025 00:50:09 +0000 (0:00:00.305) 0:01:17.886 **** 2025-09-04 00:51:22.959069 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:51:22.959078 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:51:22.959086 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:51:22.959094 | orchestrator | 2025-09-04 00:51:22.959102 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-09-04 00:51:22.959110 | orchestrator | Thursday 04 September 2025 00:50:10 +0000 (0:00:00.485) 0:01:18.371 **** 2025-09-04 00:51:22.959118 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:51:22.959126 | orchestrator | 
skipping: [testbed-node-1] 2025-09-04 00:51:22.959135 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:51:22.959143 | orchestrator | 2025-09-04 00:51:22.959151 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-09-04 00:51:22.959160 | orchestrator | Thursday 04 September 2025 00:50:10 +0000 (0:00:00.314) 0:01:18.686 **** 2025-09-04 00:51:22.959168 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:51:22.959176 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:51:22.959184 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:51:22.959193 | orchestrator | 2025-09-04 00:51:22.959201 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-09-04 00:51:22.959209 | orchestrator | Thursday 04 September 2025 00:50:10 +0000 (0:00:00.305) 0:01:18.991 **** 2025-09-04 00:51:22.959218 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:51:22.959226 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:51:22.959238 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:51:22.959246 | orchestrator | 2025-09-04 00:51:22.959254 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-04 00:51:22.959262 | orchestrator | Thursday 04 September 2025 00:50:11 +0000 (0:00:00.329) 0:01:19.321 **** 2025-09-04 00:51:22.959271 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:51:22.959279 | orchestrator | 2025-09-04 00:51:22.959288 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-09-04 00:51:22.959296 | orchestrator | Thursday 04 September 2025 00:50:11 +0000 (0:00:00.779) 0:01:20.100 **** 2025-09-04 00:51:22.959320 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:51:22.959329 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:51:22.959337 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:51:22.959344 | orchestrator | 2025-09-04 00:51:22.959353 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-09-04 00:51:22.959361 | orchestrator | Thursday 04 September 2025 00:50:12 +0000 (0:00:00.508) 0:01:20.609 **** 2025-09-04 00:51:22.959370 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:51:22.959379 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:51:22.959387 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:51:22.959396 | orchestrator | 2025-09-04 00:51:22.959405 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-09-04 00:51:22.959419 | orchestrator | Thursday 04 September 2025 00:50:12 +0000 (0:00:00.602) 0:01:21.211 **** 2025-09-04 00:51:22.959429 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:51:22.959437 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:51:22.959447 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:51:22.959456 | orchestrator | 2025-09-04 00:51:22.959465 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-09-04 00:51:22.959474 | orchestrator | Thursday 04 September 2025 00:50:13 +0000 (0:00:00.568) 0:01:21.781 **** 2025-09-04 00:51:22.959483 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:51:22.959492 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:51:22.959501 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:51:22.959510 | orchestrator | 2025-09-04 
00:51:22.959519 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-09-04 00:51:22.959528 | orchestrator | Thursday 04 September 2025 00:50:13 +0000 (0:00:00.367) 0:01:22.148 **** 2025-09-04 00:51:22.959537 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:51:22.959546 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:51:22.959555 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:51:22.959565 | orchestrator | 2025-09-04 00:51:22.959573 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-09-04 00:51:22.959582 | orchestrator | Thursday 04 September 2025 00:50:14 +0000 (0:00:00.322) 0:01:22.470 **** 2025-09-04 00:51:22.959591 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:51:22.959600 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:51:22.959609 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:51:22.959618 | orchestrator | 2025-09-04 00:51:22.959627 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-09-04 00:51:22.959636 | orchestrator | Thursday 04 September 2025 00:50:14 +0000 (0:00:00.319) 0:01:22.790 **** 2025-09-04 00:51:22.959645 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:51:22.959654 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:51:22.959663 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:51:22.959672 | orchestrator | 2025-09-04 00:51:22.959681 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-09-04 00:51:22.959690 | orchestrator | Thursday 04 September 2025 00:50:15 +0000 (0:00:00.523) 0:01:23.314 **** 2025-09-04 00:51:22.959699 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:51:22.959708 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:51:22.959717 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:51:22.959726 | orchestrator | 2025-09-04 00:51:22.959735 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-09-04 00:51:22.959744 | orchestrator | Thursday 04 September 2025 00:50:15 +0000 (0:00:00.400) 0:01:23.714 **** 2025-09-04 00:51:22.959784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.959801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.959809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.959823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': 
{'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.959837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.959847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.959858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.959867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.959874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.959884 | orchestrator | 2025-09-04 00:51:22.959892 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-09-04 00:51:22.959900 | orchestrator | Thursday 04 September 2025 00:50:17 +0000 (0:00:01.553) 0:01:25.267 **** 2025-09-04 00:51:22.959908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.959916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.959924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.959933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.959949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.959958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.959966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.959978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.959986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.959994 | orchestrator | 2025-09-04 00:51:22.960002 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-09-04 00:51:22.960011 | orchestrator | Thursday 04 September 2025 00:50:21 +0000 (0:00:04.142) 0:01:29.409 **** 2025-09-04 00:51:22.960019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 
'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.960028 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.960036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.960045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.960058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.960070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.960079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.960087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.960099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.960107 | orchestrator | 2025-09-04 00:51:22.960115 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-04 00:51:22.960123 | orchestrator | Thursday 04 September 2025 00:50:23 +0000 (0:00:02.345) 0:01:31.755 **** 2025-09-04 00:51:22.960130 | orchestrator | 2025-09-04 00:51:22.960138 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-04 00:51:22.960146 | orchestrator | Thursday 04 September 2025 00:50:23 +0000 (0:00:00.275) 0:01:32.030 **** 2025-09-04 00:51:22.960154 | orchestrator | 2025-09-04 00:51:22.960162 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-04 00:51:22.960169 | orchestrator | Thursday 04 September 2025 00:50:24 +0000 (0:00:00.240) 0:01:32.270 **** 2025-09-04 00:51:22.960177 | orchestrator | 2025-09-04 00:51:22.960185 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-09-04 00:51:22.960192 | orchestrator | Thursday 04 September 2025 00:50:24 +0000 (0:00:00.150) 0:01:32.420 **** 2025-09-04 00:51:22.960201 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:51:22.960209 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:51:22.960217 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:51:22.960225 | orchestrator | 2025-09-04 00:51:22.960233 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-09-04 00:51:22.960241 | orchestrator | Thursday 04 September 2025 00:50:31 +0000 (0:00:06.988) 0:01:39.408 **** 2025-09-04 00:51:22.960249 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:51:22.960257 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:51:22.960265 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:51:22.960277 | orchestrator | 2025-09-04 00:51:22.960285 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-09-04 00:51:22.960293 | orchestrator | Thursday 04 September 2025 00:50:38 +0000 (0:00:07.533) 0:01:46.942 **** 2025-09-04 00:51:22.960301 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:51:22.960323 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:51:22.960330 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:51:22.960337 | orchestrator | 2025-09-04 00:51:22.960344 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-09-04 00:51:22.960351 | orchestrator | Thursday 04 September 2025 00:50:41 +0000 (0:00:02.433) 0:01:49.376 **** 2025-09-04 00:51:22.960359 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:51:22.960366 | orchestrator | 2025-09-04 00:51:22.960373 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-09-04 00:51:22.960380 | orchestrator | Thursday 04 September 2025 00:50:41 +0000 (0:00:00.137) 0:01:49.513 **** 2025-09-04 00:51:22.960387 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:51:22.960394 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:51:22.960401 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:51:22.960408 | orchestrator | 2025-09-04 00:51:22.960416 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-09-04 
00:51:22.960423 | orchestrator | Thursday 04 September 2025 00:50:42 +0000 (0:00:01.040) 0:01:50.553 **** 2025-09-04 00:51:22.960430 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:51:22.960437 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:51:22.960444 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:51:22.960452 | orchestrator | 2025-09-04 00:51:22.960459 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-09-04 00:51:22.960466 | orchestrator | Thursday 04 September 2025 00:50:42 +0000 (0:00:00.639) 0:01:51.193 **** 2025-09-04 00:51:22.960473 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:51:22.960480 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:51:22.960487 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:51:22.960495 | orchestrator | 2025-09-04 00:51:22.960502 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-09-04 00:51:22.960509 | orchestrator | Thursday 04 September 2025 00:50:43 +0000 (0:00:00.788) 0:01:51.981 **** 2025-09-04 00:51:22.960516 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:51:22.960523 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:51:22.960530 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:51:22.960537 | orchestrator | 2025-09-04 00:51:22.960545 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-09-04 00:51:22.960552 | orchestrator | Thursday 04 September 2025 00:50:44 +0000 (0:00:00.771) 0:01:52.753 **** 2025-09-04 00:51:22.960559 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:51:22.960566 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:51:22.960577 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:51:22.960584 | orchestrator | 2025-09-04 00:51:22.960592 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-09-04 00:51:22.960599 | orchestrator | Thursday 04 September 2025 00:50:46 +0000 (0:00:01.533) 0:01:54.286 **** 2025-09-04 00:51:22.960606 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:51:22.960613 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:51:22.960620 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:51:22.960627 | orchestrator | 2025-09-04 00:51:22.960634 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-09-04 00:51:22.960642 | orchestrator | Thursday 04 September 2025 00:50:46 +0000 (0:00:00.751) 0:01:55.038 **** 2025-09-04 00:51:22.960649 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:51:22.960656 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:51:22.960663 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:51:22.960670 | orchestrator | 2025-09-04 00:51:22.960677 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-09-04 00:51:22.960684 | orchestrator | Thursday 04 September 2025 00:50:47 +0000 (0:00:00.314) 0:01:55.353 **** 2025-09-04 00:51:22.960696 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.960707 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 
'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.960715 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.960722 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.960730 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.960738 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.960745 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.960753 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.960764 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.960772 | orchestrator | 2025-09-04 00:51:22.960779 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-09-04 00:51:22.960786 | orchestrator | Thursday 04 September 2025 00:50:48 +0000 (0:00:01.456) 0:01:56.809 **** 2025-09-04 
00:51:22.960794 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.960805 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.960816 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.960823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.960831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.960838 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.960846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.960853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.960861 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.960868 | orchestrator | 2025-09-04 00:51:22.960875 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-09-04 00:51:22.960882 | orchestrator | Thursday 04 September 2025 00:50:52 +0000 (0:00:04.305) 0:02:01.115 **** 2025-09-04 00:51:22.960894 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.960908 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.960915 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.960926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.960933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.960941 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.960949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.960956 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.960964 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 00:51:22.960971 | orchestrator | 2025-09-04 00:51:22.960978 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-04 00:51:22.960985 | orchestrator | Thursday 04 September 2025 00:50:55 +0000 (0:00:02.904) 0:02:04.019 **** 2025-09-04 00:51:22.960993 | orchestrator | 2025-09-04 00:51:22.961000 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-04 00:51:22.961007 | orchestrator | Thursday 04 September 2025 00:50:55 +0000 (0:00:00.064) 0:02:04.083 **** 2025-09-04 00:51:22.961018 | orchestrator | 2025-09-04 00:51:22.961025 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-04 00:51:22.961032 | orchestrator | Thursday 04 September 2025 00:50:55 +0000 (0:00:00.063) 0:02:04.147 **** 2025-09-04 00:51:22.961039 | orchestrator | 2025-09-04 00:51:22.961046 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-09-04 00:51:22.961054 | orchestrator | Thursday 04 September 2025 00:50:55 +0000 (0:00:00.064) 0:02:04.212 **** 2025-09-04 00:51:22.961061 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:51:22.961068 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:51:22.961075 | orchestrator | 2025-09-04 00:51:22.961085 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-09-04 00:51:22.961093 | orchestrator | Thursday 04 September 2025 00:51:02 +0000 (0:00:06.132) 0:02:10.344 **** 2025-09-04 00:51:22.961100 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:51:22.961107 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:51:22.961114 | orchestrator | 2025-09-04 00:51:22.961121 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-09-04 00:51:22.961128 | orchestrator | Thursday 04 September 2025 00:51:08 +0000 (0:00:06.188) 0:02:16.532 **** 2025-09-04 00:51:22.961135 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:51:22.961142 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:51:22.961149 | orchestrator | 2025-09-04 00:51:22.961157 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-09-04 00:51:22.961164 | orchestrator | Thursday 04 September 2025 00:51:14 +0000 (0:00:06.623) 0:02:23.156 **** 2025-09-04 00:51:22.961171 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:51:22.961178 | orchestrator | 2025-09-04 00:51:22.961185 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-09-04 00:51:22.961192 | orchestrator | Thursday 04 September 2025 00:51:15 +0000 (0:00:00.127) 
0:02:23.283 **** 2025-09-04 00:51:22.961199 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:51:22.961207 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:51:22.961214 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:51:22.961221 | orchestrator | 2025-09-04 00:51:22.961228 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-09-04 00:51:22.961235 | orchestrator | Thursday 04 September 2025 00:51:15 +0000 (0:00:00.735) 0:02:24.019 **** 2025-09-04 00:51:22.961246 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:51:22.961253 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:51:22.961260 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:51:22.961267 | orchestrator | 2025-09-04 00:51:22.961274 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-09-04 00:51:22.961281 | orchestrator | Thursday 04 September 2025 00:51:16 +0000 (0:00:00.619) 0:02:24.639 **** 2025-09-04 00:51:22.961288 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:51:22.961296 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:51:22.961321 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:51:22.961328 | orchestrator | 2025-09-04 00:51:22.961335 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-09-04 00:51:22.961342 | orchestrator | Thursday 04 September 2025 00:51:17 +0000 (0:00:00.790) 0:02:25.429 **** 2025-09-04 00:51:22.961350 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:51:22.961357 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:51:22.961364 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:51:22.961371 | orchestrator | 2025-09-04 00:51:22.961378 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-09-04 00:51:22.961386 | orchestrator | Thursday 04 September 2025 00:51:18 +0000 (0:00:00.885) 0:02:26.315 **** 2025-09-04 00:51:22.961393 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:51:22.961400 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:51:22.961407 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:51:22.961415 | orchestrator | 2025-09-04 00:51:22.961422 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-09-04 00:51:22.961429 | orchestrator | Thursday 04 September 2025 00:51:18 +0000 (0:00:00.732) 0:02:27.047 **** 2025-09-04 00:51:22.961441 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:51:22.961448 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:51:22.961455 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:51:22.961463 | orchestrator | 2025-09-04 00:51:22.961470 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:51:22.961477 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-09-04 00:51:22.961485 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-09-04 00:51:22.961492 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-09-04 00:51:22.961500 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:51:22.961507 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:51:22.961514 | orchestrator | testbed-node-5 : 
ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:51:22.961522 | orchestrator | 2025-09-04 00:51:22.961529 | orchestrator | 2025-09-04 00:51:22.961536 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 00:51:22.961544 | orchestrator | Thursday 04 September 2025 00:51:19 +0000 (0:00:00.876) 0:02:27.924 **** 2025-09-04 00:51:22.961551 | orchestrator | =============================================================================== 2025-09-04 00:51:22.961558 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 34.86s 2025-09-04 00:51:22.961565 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.62s 2025-09-04 00:51:22.961572 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.72s 2025-09-04 00:51:22.961580 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.12s 2025-09-04 00:51:22.961587 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 9.06s 2025-09-04 00:51:22.961594 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.31s 2025-09-04 00:51:22.961601 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.14s 2025-09-04 00:51:22.961612 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.90s 2025-09-04 00:51:22.961619 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.73s 2025-09-04 00:51:22.961626 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.39s 2025-09-04 00:51:22.961634 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.35s 2025-09-04 00:51:22.961641 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.11s 2025-09-04 00:51:22.961648 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.72s 2025-09-04 00:51:22.961655 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.65s 2025-09-04 00:51:22.961662 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.58s 2025-09-04 00:51:22.961670 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.57s 2025-09-04 00:51:22.961677 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.55s 2025-09-04 00:51:22.961684 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.53s 2025-09-04 00:51:22.961691 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.46s 2025-09-04 00:51:22.961699 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.23s 2025-09-04 00:51:22.961713 | orchestrator | 2025-09-04 00:51:22 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:51:22.961721 | orchestrator | 2025-09-04 00:51:22 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:51:25.998112 | orchestrator | 2025-09-04 00:51:25 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:51:25.999153 | orchestrator | 2025-09-04 00:51:25 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:51:25.999185 | 
orchestrator | 2025-09-04 00:51:25 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:51:29.048669 | orchestrator | 2025-09-04 00:51:29 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:51:29.050580 | orchestrator | 2025-09-04 00:51:29 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:51:29.050633 | orchestrator | 2025-09-04 00:51:29 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:51:32.084572 | orchestrator | 2025-09-04 00:51:32 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:51:32.085677 | orchestrator | 2025-09-04 00:51:32 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:51:32.085918 | orchestrator | 2025-09-04 00:51:32 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:51:35.119861 | orchestrator | 2025-09-04 00:51:35 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:51:35.122542 | orchestrator | 2025-09-04 00:51:35 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:51:35.122591 | orchestrator | 2025-09-04 00:51:35 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:51:38.169735 | orchestrator | 2025-09-04 00:51:38 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:51:38.170494 | orchestrator | 2025-09-04 00:51:38 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:51:38.170617 | orchestrator | 2025-09-04 00:51:38 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:51:41.212920 | orchestrator | 2025-09-04 00:51:41 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:51:41.213109 | orchestrator | 2025-09-04 00:51:41 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:51:41.213206 | orchestrator | 2025-09-04 00:51:41 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:51:44.255399 | orchestrator | 2025-09-04 00:51:44 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:51:44.256505 | orchestrator | 2025-09-04 00:51:44 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:51:44.256533 | orchestrator | 2025-09-04 00:51:44 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:51:47.300824 | orchestrator | 2025-09-04 00:51:47 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:51:47.302870 | orchestrator | 2025-09-04 00:51:47 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:51:47.303366 | orchestrator | 2025-09-04 00:51:47 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:51:50.357216 | orchestrator | 2025-09-04 00:51:50 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:51:50.359572 | orchestrator | 2025-09-04 00:51:50 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:51:50.359622 | orchestrator | 2025-09-04 00:51:50 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:51:53.391098 | orchestrator | 2025-09-04 00:51:53 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:51:53.391869 | orchestrator | 2025-09-04 00:51:53 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:51:53.392510 | orchestrator | 2025-09-04 00:51:53 | INFO  | Wait 1 second(s) until the next check 2025-09-04 
00:51:56.433702 | orchestrator | 2025-09-04 00:51:56 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:51:56.433922 | orchestrator | 2025-09-04 00:51:56 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:51:56.433942 | orchestrator | 2025-09-04 00:51:56 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:51:59.468799 | orchestrator | 2025-09-04 00:51:59 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:51:59.470401 | orchestrator | 2025-09-04 00:51:59 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:51:59.470880 | orchestrator | 2025-09-04 00:51:59 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:52:02.516181 | orchestrator | 2025-09-04 00:52:02 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:52:02.518156 | orchestrator | 2025-09-04 00:52:02 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:52:02.518646 | orchestrator | 2025-09-04 00:52:02 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:52:05.558374 | orchestrator | 2025-09-04 00:52:05 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:52:05.558645 | orchestrator | 2025-09-04 00:52:05 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:52:05.558667 | orchestrator | 2025-09-04 00:52:05 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:52:08.592551 | orchestrator | 2025-09-04 00:52:08 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:52:08.592936 | orchestrator | 2025-09-04 00:52:08 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:52:08.592968 | orchestrator | 2025-09-04 00:52:08 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:52:11.643687 | orchestrator | 2025-09-04 00:52:11 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:52:11.643784 | orchestrator | 2025-09-04 00:52:11 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:52:11.643800 | orchestrator | 2025-09-04 00:52:11 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:52:14.689002 | orchestrator | 2025-09-04 00:52:14 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:52:14.692077 | orchestrator | 2025-09-04 00:52:14 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:52:14.692110 | orchestrator | 2025-09-04 00:52:14 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:52:17.731697 | orchestrator | 2025-09-04 00:52:17 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:52:17.732151 | orchestrator | 2025-09-04 00:52:17 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:52:17.732179 | orchestrator | 2025-09-04 00:52:17 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:52:20.768772 | orchestrator | 2025-09-04 00:52:20 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:52:20.769749 | orchestrator | 2025-09-04 00:52:20 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:52:20.769776 | orchestrator | 2025-09-04 00:52:20 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:52:23.818892 | orchestrator | 2025-09-04 00:52:23 | INFO  | Task 
e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:52:23.820617 | orchestrator | 2025-09-04 00:52:23 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:52:23.820655 | orchestrator | 2025-09-04 00:52:23 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:52:26.869903 | orchestrator | 2025-09-04 00:52:26 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:52:26.872494 | orchestrator | 2025-09-04 00:52:26 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:52:26.872712 | orchestrator | 2025-09-04 00:52:26 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:52:29.906357 | orchestrator | 2025-09-04 00:52:29 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:52:29.907756 | orchestrator | 2025-09-04 00:52:29 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:52:29.907894 | orchestrator | 2025-09-04 00:52:29 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:52:32.944736 | orchestrator | 2025-09-04 00:52:32 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:52:32.947422 | orchestrator | 2025-09-04 00:52:32 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:52:32.948459 | orchestrator | 2025-09-04 00:52:32 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:52:35.987670 | orchestrator | 2025-09-04 00:52:35 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:52:35.990670 | orchestrator | 2025-09-04 00:52:35 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:52:35.990716 | orchestrator | 2025-09-04 00:52:35 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:52:39.033407 | orchestrator | 2025-09-04 00:52:39 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:52:39.033534 | orchestrator | 2025-09-04 00:52:39 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:52:39.033550 | orchestrator | 2025-09-04 00:52:39 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:52:42.072214 | orchestrator | 2025-09-04 00:52:42 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:52:42.074543 | orchestrator | 2025-09-04 00:52:42 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:52:42.074578 | orchestrator | 2025-09-04 00:52:42 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:52:45.118860 | orchestrator | 2025-09-04 00:52:45 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:52:45.121629 | orchestrator | 2025-09-04 00:52:45 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:52:45.122178 | orchestrator | 2025-09-04 00:52:45 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:52:48.176400 | orchestrator | 2025-09-04 00:52:48 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:52:48.180861 | orchestrator | 2025-09-04 00:52:48 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:52:48.180895 | orchestrator | 2025-09-04 00:52:48 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:52:51.221786 | orchestrator | 2025-09-04 00:52:51 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:52:51.223045 | orchestrator 
| 2025-09-04 00:52:51 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:52:51.223112 | orchestrator | 2025-09-04 00:52:51 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:52:54.268783 | orchestrator | 2025-09-04 00:52:54 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:52:54.269200 | orchestrator | 2025-09-04 00:52:54 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:52:54.269277 | orchestrator | 2025-09-04 00:52:54 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:52:57.325415 | orchestrator | 2025-09-04 00:52:57 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:52:57.327018 | orchestrator | 2025-09-04 00:52:57 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:52:57.329329 | orchestrator | 2025-09-04 00:52:57 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:53:00.373613 | orchestrator | 2025-09-04 00:53:00 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:53:00.376979 | orchestrator | 2025-09-04 00:53:00 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:53:00.377045 | orchestrator | 2025-09-04 00:53:00 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:53:03.415114 | orchestrator | 2025-09-04 00:53:03 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:53:03.416883 | orchestrator | 2025-09-04 00:53:03 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:53:03.416918 | orchestrator | 2025-09-04 00:53:03 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:53:06.447774 | orchestrator | 2025-09-04 00:53:06 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:53:06.447881 | orchestrator | 2025-09-04 00:53:06 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:53:06.447896 | orchestrator | 2025-09-04 00:53:06 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:53:09.482908 | orchestrator | 2025-09-04 00:53:09 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:53:09.485118 | orchestrator | 2025-09-04 00:53:09 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:53:09.485590 | orchestrator | 2025-09-04 00:53:09 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:53:12.526111 | orchestrator | 2025-09-04 00:53:12 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:53:12.529073 | orchestrator | 2025-09-04 00:53:12 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:53:12.530191 | orchestrator | 2025-09-04 00:53:12 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:53:15.576640 | orchestrator | 2025-09-04 00:53:15 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:53:15.577432 | orchestrator | 2025-09-04 00:53:15 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:53:15.577464 | orchestrator | 2025-09-04 00:53:15 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:53:18.621910 | orchestrator | 2025-09-04 00:53:18 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:53:18.623125 | orchestrator | 2025-09-04 00:53:18 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 
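
The interleaved "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" lines here are the deploy wrapper polling the two Kolla deploy tasks by ID until each reaches a terminal state. The sketch below only illustrates that polling pattern; the get_task_state helper and its canned state sequences are assumptions made for the example, not the actual OSISM manager API.

    import itertools
    import time

    # Illustrative stand-in only: the real tooling would query its task
    # backend for the current state instead of reading canned sequences.
    _SIMULATED_STATES = {
        "e1c05202-9a49-4ef7-a742-4a0d2c71bca3":
            itertools.chain(["STARTED"] * 3, itertools.repeat("SUCCESS")),
        "2455333a-1467-4e15-afeb-18d3544d41aa":
            itertools.chain(["STARTED"] * 5, itertools.repeat("SUCCESS")),
    }

    def get_task_state(task_id: str) -> str:
        # Hypothetical helper; returns the next simulated state for the task.
        return next(_SIMULATED_STATES[task_id])

    def wait_for_tasks(task_ids, interval=1):
        """Poll each task until it reaches a terminal state, logging as we go."""
        pending = set(task_ids)
        while pending:
            for task_id in list(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.remove(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)

    # Task IDs as they appear in this log.
    wait_for_tasks([
        "e1c05202-9a49-4ef7-a742-4a0d2c71bca3",
        "2455333a-1467-4e15-afeb-18d3544d41aa",
    ])

Judging by the timestamps, the Ansible output that appears after the SUCCESS message at 00:54:01 (task headers dated 00:47:36 onwards) was buffered while the task ran and is only printed once the task finishes.
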
2025-09-04 00:53:18.623152 | orchestrator | 2025-09-04 00:53:18 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:53:21.667000 | orchestrator | 2025-09-04 00:53:21 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:53:21.667532 | orchestrator | 2025-09-04 00:53:21 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:53:21.667561 | orchestrator | 2025-09-04 00:53:21 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:53:24.709793 | orchestrator | 2025-09-04 00:53:24 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:53:24.711387 | orchestrator | 2025-09-04 00:53:24 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:53:24.711419 | orchestrator | 2025-09-04 00:53:24 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:53:27.774098 | orchestrator | 2025-09-04 00:53:27 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:53:27.777673 | orchestrator | 2025-09-04 00:53:27 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:53:27.777722 | orchestrator | 2025-09-04 00:53:27 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:53:30.815145 | orchestrator | 2025-09-04 00:53:30 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:53:30.816583 | orchestrator | 2025-09-04 00:53:30 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:53:30.816615 | orchestrator | 2025-09-04 00:53:30 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:53:33.861862 | orchestrator | 2025-09-04 00:53:33 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:53:33.863170 | orchestrator | 2025-09-04 00:53:33 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:53:33.863313 | orchestrator | 2025-09-04 00:53:33 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:53:36.895773 | orchestrator | 2025-09-04 00:53:36 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:53:36.897433 | orchestrator | 2025-09-04 00:53:36 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:53:36.897473 | orchestrator | 2025-09-04 00:53:36 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:53:39.940673 | orchestrator | 2025-09-04 00:53:39 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:53:39.942518 | orchestrator | 2025-09-04 00:53:39 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:53:39.942560 | orchestrator | 2025-09-04 00:53:39 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:53:42.979470 | orchestrator | 2025-09-04 00:53:42 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:53:42.980694 | orchestrator | 2025-09-04 00:53:42 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:53:42.980730 | orchestrator | 2025-09-04 00:53:42 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:53:46.018519 | orchestrator | 2025-09-04 00:53:46 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:53:46.018620 | orchestrator | 2025-09-04 00:53:46 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:53:46.018635 | orchestrator | 2025-09-04 00:53:46 | INFO  | Wait 1 second(s) until 
the next check 2025-09-04 00:53:49.059667 | orchestrator | 2025-09-04 00:53:49 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:53:49.062148 | orchestrator | 2025-09-04 00:53:49 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:53:49.062220 | orchestrator | 2025-09-04 00:53:49 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:53:52.104722 | orchestrator | 2025-09-04 00:53:52 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:53:52.106320 | orchestrator | 2025-09-04 00:53:52 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:53:52.107333 | orchestrator | 2025-09-04 00:53:52 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:53:55.148149 | orchestrator | 2025-09-04 00:53:55 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:53:55.151570 | orchestrator | 2025-09-04 00:53:55 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:53:55.151650 | orchestrator | 2025-09-04 00:53:55 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:53:58.193589 | orchestrator | 2025-09-04 00:53:58 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state STARTED 2025-09-04 00:53:58.193693 | orchestrator | 2025-09-04 00:53:58 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:53:58.193721 | orchestrator | 2025-09-04 00:53:58 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:54:01.241311 | orchestrator | 2025-09-04 00:54:01 | INFO  | Task e1c05202-9a49-4ef7-a742-4a0d2c71bca3 is in state SUCCESS 2025-09-04 00:54:01.242319 | orchestrator | 2025-09-04 00:54:01.242351 | orchestrator | 2025-09-04 00:54:01.242363 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-04 00:54:01.242376 | orchestrator | 2025-09-04 00:54:01.242387 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-04 00:54:01.242399 | orchestrator | Thursday 04 September 2025 00:47:36 +0000 (0:00:00.518) 0:00:00.518 **** 2025-09-04 00:54:01.242411 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:54:01.242424 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:54:01.242436 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:54:01.242519 | orchestrator | 2025-09-04 00:54:01.242531 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-04 00:54:01.242543 | orchestrator | Thursday 04 September 2025 00:47:37 +0000 (0:00:00.493) 0:00:01.012 **** 2025-09-04 00:54:01.242555 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-09-04 00:54:01.242566 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-09-04 00:54:01.242578 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-09-04 00:54:01.242588 | orchestrator | 2025-09-04 00:54:01.242697 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-09-04 00:54:01.242709 | orchestrator | 2025-09-04 00:54:01.242720 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-09-04 00:54:01.242732 | orchestrator | Thursday 04 September 2025 00:47:37 +0000 (0:00:00.674) 0:00:01.687 **** 2025-09-04 00:54:01.242743 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2025-09-04 00:54:01.242754 | orchestrator | 2025-09-04 00:54:01.242765 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-09-04 00:54:01.242776 | orchestrator | Thursday 04 September 2025 00:47:38 +0000 (0:00:00.891) 0:00:02.579 **** 2025-09-04 00:54:01.242787 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:54:01.242798 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:54:01.242809 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:54:01.242820 | orchestrator | 2025-09-04 00:54:01.242831 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-09-04 00:54:01.242842 | orchestrator | Thursday 04 September 2025 00:47:39 +0000 (0:00:00.830) 0:00:03.409 **** 2025-09-04 00:54:01.242853 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:54:01.242864 | orchestrator | 2025-09-04 00:54:01.242875 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-09-04 00:54:01.242909 | orchestrator | Thursday 04 September 2025 00:47:40 +0000 (0:00:01.049) 0:00:04.459 **** 2025-09-04 00:54:01.242921 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:54:01.242934 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:54:01.242946 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:54:01.242960 | orchestrator | 2025-09-04 00:54:01.244852 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-09-04 00:54:01.244867 | orchestrator | Thursday 04 September 2025 00:47:41 +0000 (0:00:00.775) 0:00:05.235 **** 2025-09-04 00:54:01.244878 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-04 00:54:01.244890 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-04 00:54:01.244900 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-04 00:54:01.244911 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-04 00:54:01.244922 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-04 00:54:01.244933 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-04 00:54:01.244943 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-04 00:54:01.244955 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-04 00:54:01.244966 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-04 00:54:01.244977 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-04 00:54:01.244987 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-04 00:54:01.245004 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-04 00:54:01.245015 | orchestrator | 2025-09-04 00:54:01.245026 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-04 00:54:01.245037 | orchestrator | Thursday 04 September 2025 00:47:44 +0000 (0:00:02.656) 0:00:07.891 **** 2025-09-04 
00:54:01.245048 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-09-04 00:54:01.245059 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-09-04 00:54:01.245136 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-09-04 00:54:01.245148 | orchestrator | 2025-09-04 00:54:01.245159 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-04 00:54:01.245324 | orchestrator | Thursday 04 September 2025 00:47:44 +0000 (0:00:00.871) 0:00:08.763 **** 2025-09-04 00:54:01.245338 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-09-04 00:54:01.245349 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-09-04 00:54:01.245360 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-09-04 00:54:01.245371 | orchestrator | 2025-09-04 00:54:01.245382 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-04 00:54:01.245392 | orchestrator | Thursday 04 September 2025 00:47:47 +0000 (0:00:02.048) 0:00:10.811 **** 2025-09-04 00:54:01.245405 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-09-04 00:54:01.245419 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.245446 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-09-04 00:54:01.245460 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.245473 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-09-04 00:54:01.245486 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.245498 | orchestrator | 2025-09-04 00:54:01.245509 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-09-04 00:54:01.245521 | orchestrator | Thursday 04 September 2025 00:47:47 +0000 (0:00:00.626) 0:00:11.437 **** 2025-09-04 00:54:01.245535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-04 00:54:01.245566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-04 00:54:01.245579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-04 00:54:01.245591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-04 00:54:01.245608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-04 00:54:01.245627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-04 00:54:01.245801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-04 00:54:01.245821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-04 00:54:01.245831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-04 00:54:01.245841 | orchestrator | 2025-09-04 00:54:01.245851 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-09-04 00:54:01.245861 | orchestrator | Thursday 04 September 2025 00:47:51 +0000 (0:00:03.353) 0:00:14.791 **** 2025-09-04 00:54:01.245871 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:54:01.245881 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:54:01.245891 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:54:01.245900 | orchestrator | 2025-09-04 00:54:01.245910 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-09-04 00:54:01.245920 | orchestrator | Thursday 04 September 2025 00:47:52 +0000 (0:00:01.854) 0:00:16.645 **** 2025-09-04 00:54:01.245929 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-09-04 00:54:01.245940 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-09-04 00:54:01.245949 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-09-04 00:54:01.245959 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-09-04 00:54:01.245969 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-09-04 00:54:01.245978 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-09-04 00:54:01.245988 | orchestrator | 2025-09-04 00:54:01.245998 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-09-04 00:54:01.246008 | orchestrator | Thursday 04 September 2025 00:47:54 +0000 (0:00:02.053) 0:00:18.698 **** 2025-09-04 00:54:01.246210 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:54:01.246227 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:54:01.246237 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:54:01.246246 | orchestrator | 2025-09-04 00:54:01.246256 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-09-04 00:54:01.246266 | orchestrator | Thursday 04 September 2025 00:47:55 +0000 (0:00:01.055) 0:00:19.754 **** 2025-09-04 00:54:01.246275 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:54:01.246285 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:54:01.246294 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:54:01.246304 | orchestrator | 2025-09-04 00:54:01.246314 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-09-04 00:54:01.246323 | orchestrator | Thursday 04 September 2025 00:47:58 +0000 (0:00:02.383) 0:00:22.138 **** 2025-09-04 00:54:01.246338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-04 00:54:01.246365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-04 00:54:01.246376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-04 00:54:01.246387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6c2324d6ee676d84d1cd9e6bd63aea10f108393d', '__omit_place_holder__6c2324d6ee676d84d1cd9e6bd63aea10f108393d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-04 00:54:01.246397 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.246407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-04 00:54:01.246418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-04 00:54:01.246428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-04 00:54:01.246443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6c2324d6ee676d84d1cd9e6bd63aea10f108393d', '__omit_place_holder__6c2324d6ee676d84d1cd9e6bd63aea10f108393d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-04 00:54:01.246453 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.246471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-04 00:54:01.246481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-04 00:54:01.246512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-04 00:54:01.246523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__6c2324d6ee676d84d1cd9e6bd63aea10f108393d', '__omit_place_holder__6c2324d6ee676d84d1cd9e6bd63aea10f108393d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-04 00:54:01.246533 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.246543 | orchestrator | 2025-09-04 00:54:01.246553 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-09-04 00:54:01.246562 | orchestrator | Thursday 04 September 2025 00:48:00 +0000 (0:00:02.247) 0:00:24.385 **** 2025-09-04 00:54:01.246576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-04 00:54:01.246596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-04 00:54:01.246635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-04 00:54:01.246654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-04 00:54:01.246665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-04 00:54:01.246676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-04 00:54:01.246686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6c2324d6ee676d84d1cd9e6bd63aea10f108393d', '__omit_place_holder__6c2324d6ee676d84d1cd9e6bd63aea10f108393d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-04 00:54:01.246706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-04 00:54:01.246716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6c2324d6ee676d84d1cd9e6bd63aea10f108393d', '__omit_place_holder__6c2324d6ee676d84d1cd9e6bd63aea10f108393d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-04 00:54:01.246772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-04 00:54:01.246783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-04 00:54:01.246793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6c2324d6ee676d84d1cd9e6bd63aea10f108393d', '__omit_place_holder__6c2324d6ee676d84d1cd9e6bd63aea10f108393d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-04 00:54:01.246803 | orchestrator | 2025-09-04 00:54:01.246813 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-09-04 00:54:01.246823 | orchestrator | Thursday 04 September 2025 00:48:03 +0000 (0:00:03.379) 0:00:27.764 **** 2025-09-04 00:54:01.246833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-04 00:54:01.246893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-04 00:54:01.246927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-04 00:54:01.246944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-04 00:54:01.246955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-04 00:54:01.246965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-04 00:54:01.246975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-04 00:54:01.246985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-04 00:54:01.247009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-04 00:54:01.247019 | orchestrator | 2025-09-04 00:54:01.247029 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-09-04 00:54:01.247039 | orchestrator | Thursday 04 September 2025 00:48:07 +0000 (0:00:03.746) 0:00:31.511 **** 2025-09-04 00:54:01.247048 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-04 00:54:01.247058 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-04 00:54:01.247068 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-04 00:54:01.247078 | orchestrator | 2025-09-04 00:54:01.247087 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-09-04 00:54:01.247097 | orchestrator | Thursday 04 September 2025 00:48:11 +0000 (0:00:03.316) 0:00:34.827 **** 2025-09-04 00:54:01.247107 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-04 00:54:01.247116 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-04 00:54:01.247126 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-04 00:54:01.247136 | orchestrator | 2025-09-04 00:54:01.247156 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-09-04 00:54:01.247166 | orchestrator | Thursday 04 September 2025 00:48:17 +0000 (0:00:06.134) 0:00:40.962 **** 2025-09-04 00:54:01.247191 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.247201 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.247211 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.247221 | orchestrator | 2025-09-04 00:54:01.247230 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-09-04 00:54:01.247289 | orchestrator | Thursday 04 September 2025 00:48:17 +0000 (0:00:00.779) 0:00:41.741 **** 2025-09-04 00:54:01.247300 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-04 00:54:01.247397 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-04 00:54:01.247410 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-04 00:54:01.247420 | orchestrator | 2025-09-04 00:54:01.247430 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-09-04 00:54:01.247440 | orchestrator | Thursday 04 September 2025 00:48:20 +0000 (0:00:02.736) 0:00:44.478 **** 2025-09-04 00:54:01.247449 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-04 00:54:01.247459 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-04 00:54:01.247469 | orchestrator | 
changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-04 00:54:01.247484 | orchestrator | 2025-09-04 00:54:01.247494 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-09-04 00:54:01.247504 | orchestrator | Thursday 04 September 2025 00:48:24 +0000 (0:00:03.332) 0:00:47.810 **** 2025-09-04 00:54:01.247514 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-09-04 00:54:01.247523 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-09-04 00:54:01.247533 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-09-04 00:54:01.247543 | orchestrator | 2025-09-04 00:54:01.247552 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-09-04 00:54:01.247562 | orchestrator | Thursday 04 September 2025 00:48:25 +0000 (0:00:01.888) 0:00:49.699 **** 2025-09-04 00:54:01.247572 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-09-04 00:54:01.247581 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-09-04 00:54:01.247591 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-09-04 00:54:01.247601 | orchestrator | 2025-09-04 00:54:01.247610 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-09-04 00:54:01.247620 | orchestrator | Thursday 04 September 2025 00:48:28 +0000 (0:00:02.139) 0:00:51.838 **** 2025-09-04 00:54:01.247629 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:54:01.247639 | orchestrator | 2025-09-04 00:54:01.247649 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-09-04 00:54:01.247658 | orchestrator | Thursday 04 September 2025 00:48:28 +0000 (0:00:00.674) 0:00:52.513 **** 2025-09-04 00:54:01.247673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-04 00:54:01.247684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-04 00:54:01.247700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-04 00:54:01.247711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-04 00:54:01.247726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-04 00:54:01.247737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-04 00:54:01.247747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-04 00:54:01.247761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-04 00:54:01.247772 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-04 00:54:01.247782 | orchestrator | 2025-09-04 00:54:01.247792 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-09-04 00:54:01.247802 | orchestrator | Thursday 04 September 2025 00:48:33 +0000 (0:00:04.524) 0:00:57.037 **** 2025-09-04 00:54:01.247818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-04 00:54:01.247833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-04 00:54:01.247844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-04 00:54:01.247854 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.247864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-04 00:54:01.247875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 
'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-04 00:54:01.247889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-04 00:54:01.247899 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.247909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-04 00:54:01.247925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-04 00:54:01.247940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-04 00:54:01.247951 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.247960 | orchestrator | 2025-09-04 00:54:01.247970 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-09-04 00:54:01.247980 | orchestrator | Thursday 04 September 2025 00:48:34 +0000 (0:00:00.767) 0:00:57.804 **** 2025-09-04 00:54:01.247990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-04 00:54:01.248000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-04 00:54:01.248014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-04 00:54:01.248024 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.248035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-04 00:54:01.248050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-04 00:54:01.248065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-04 00:54:01.248075 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.248085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-04 00:54:01.248096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-04 00:54:01.248106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-04 00:54:01.248116 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.248126 | orchestrator | 2025-09-04 00:54:01.248135 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-04 00:54:01.248145 | orchestrator | Thursday 04 September 2025 00:48:35 +0000 (0:00:01.183) 0:00:58.988 **** 2025-09-04 00:54:01.248159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-04 00:54:01.248187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-04 00:54:01.248204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-04 00:54:01.248214 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.248224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-04 00:54:01.248234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-04 00:54:01.248245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-04 00:54:01.248255 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.248265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-04 00:54:01.248279 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-04 00:54:01.248300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-04 00:54:01.248311 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.248320 | orchestrator | 2025-09-04 00:54:01.248330 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-09-04 00:54:01.248340 | orchestrator | Thursday 04 September 2025 00:48:37 +0000 (0:00:02.677) 0:01:01.665 **** 2025-09-04 00:54:01.248350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-04 00:54:01.248360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-04 00:54:01.248371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-04 00:54:01.248381 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.248390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 
'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-04 00:54:01.248408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-04 00:54:01.248423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-04 00:54:01.248434 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.248449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-04 00:54:01.248459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-04 00:54:01.248470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-04 00:54:01.248480 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.248490 | orchestrator | 2025-09-04 00:54:01.248499 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-09-04 00:54:01.248509 | orchestrator | Thursday 04 September 2025 00:48:39 +0000 (0:00:01.820) 0:01:03.486 **** 2025-09-04 00:54:01.248519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-04 00:54:01.248530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-04 00:54:01.248548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-04 00:54:01.248559 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.248573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-04 00:54:01.248584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-04 00:54:01.248595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-04 00:54:01.248605 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.248614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-04 00:54:01.248625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-04 00:54:01.248644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-04 00:54:01.248654 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.248664 | orchestrator | 2025-09-04 00:54:01.248674 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-09-04 00:54:01.248684 | orchestrator | Thursday 04 September 2025 00:48:40 +0000 (0:00:01.111) 0:01:04.598 **** 2025-09-04 00:54:01.248694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-04 00:54:01.248709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-04 00:54:01.248720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-04 00:54:01.248730 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.248740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-04 00:54:01.248750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-04 00:54:01.248766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-04 00:54:01.248775 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.248789 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-04 00:54:01.248804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-04 00:54:01.248815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-04 00:54:01.248825 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.248835 | orchestrator | 2025-09-04 00:54:01.248844 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-09-04 00:54:01.248854 | orchestrator | Thursday 04 September 2025 00:48:41 +0000 (0:00:01.075) 0:01:05.673 **** 2025-09-04 00:54:01.248864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-04 00:54:01.248875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-04 
00:54:01.248891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-04 00:54:01.248901 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.248914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-04 00:54:01.248925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-04 00:54:01.250217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-04 00:54:01.250297 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.250315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-04 00:54:01.250329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-04 00:54:01.250366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-04 00:54:01.250378 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.250389 | orchestrator | 2025-09-04 00:54:01.250401 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-09-04 00:54:01.250413 | orchestrator | Thursday 04 September 2025 00:48:42 +0000 (0:00:00.751) 0:01:06.425 **** 2025-09-04 00:54:01.250424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-04 00:54:01.250447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-04 00:54:01.250459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-04 00:54:01.250471 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.250497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-04 00:54:01.250510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-04 00:54:01.250530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-04 00:54:01.250542 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.250553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-04 00:54:01.250569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-04 00:54:01.250581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-04 00:54:01.250593 | orchestrator | skipping: [testbed-node-1] 2025-09-04 
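The loop items repeated above are the per-node loadbalancer service definitions the role iterates over: haproxy and keepalived run privileged (both share the haproxy socket, and keepalived additionally mounts /lib/modules read-only), while proxysql does not and keeps its state in the proxysql volume. A minimal Python sketch of that mapping, with values copied from the loop items; the variable name and the abbreviated volume lists are illustrative (the common /etc/kolla, localtime and timezone mounts are omitted), and the per-node healthcheck URLs differ only in the node's internal API address:

    # Loadbalancer service map as shown in the loop items above (testbed-node-0 values).
    loadbalancer_services = {
        "haproxy": {
            "image": "registry.osism.tech/kolla/haproxy:2024.2",
            "privileged": True,
            "volumes": ["haproxy_socket:/var/lib/kolla/haproxy/",
                        "letsencrypt_certificates:/etc/haproxy/certificates"],
            "healthcheck": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:61313"],
        },
        "proxysql": {
            "image": "registry.osism.tech/kolla/proxysql:2024.2",
            "privileged": False,
            "volumes": ["proxysql:/var/lib/proxysql/",
                        "proxysql_socket:/var/lib/kolla/proxysql/"],
            "healthcheck": ["CMD-SHELL", "healthcheck_listen proxysql 6032"],
        },
        "keepalived": {
            "image": "registry.osism.tech/kolla/keepalived:2024.2",
            "privileged": True,
            "volumes": ["/lib/modules:/lib/modules:ro",
                        "haproxy_socket:/var/lib/kolla/haproxy/",
                        "proxysql_socket:/var/lib/kolla/proxysql/"],
            # keepalived carries no healthcheck block in the log items
        },
    }

    # Same shape of iteration as the task's per-item loop:
    for name, svc in loadbalancer_services.items():
        mode = "privileged" if svc["privileged"] else "unprivileged"
        print(f"{name}: {mode}, image {svc['image']}")
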
00:54:01.250604 | orchestrator | 2025-09-04 00:54:01.250615 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-09-04 00:54:01.250626 | orchestrator | Thursday 04 September 2025 00:48:43 +0000 (0:00:00.947) 0:01:07.373 **** 2025-09-04 00:54:01.250637 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-04 00:54:01.250649 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-04 00:54:01.250667 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-04 00:54:01.250679 | orchestrator | 2025-09-04 00:54:01.250690 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-09-04 00:54:01.250701 | orchestrator | Thursday 04 September 2025 00:48:45 +0000 (0:00:02.064) 0:01:09.438 **** 2025-09-04 00:54:01.250712 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-04 00:54:01.250723 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-04 00:54:01.250735 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-04 00:54:01.250746 | orchestrator | 2025-09-04 00:54:01.250757 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-09-04 00:54:01.250768 | orchestrator | Thursday 04 September 2025 00:48:47 +0000 (0:00:01.403) 0:01:10.841 **** 2025-09-04 00:54:01.250785 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-04 00:54:01.250796 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-04 00:54:01.250806 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-04 00:54:01.250817 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.250828 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-04 00:54:01.250839 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-04 00:54:01.250850 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.250861 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-04 00:54:01.250871 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.250882 | orchestrator | 2025-09-04 00:54:01.250893 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-09-04 00:54:01.250904 | orchestrator | Thursday 04 September 2025 00:48:48 +0000 (0:00:01.326) 0:01:12.168 **** 2025-09-04 00:54:01.250915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-04 00:54:01.250927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-04 00:54:01.250939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-04 00:54:01.250959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-04 00:54:01.250971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-04 00:54:01.250988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-04 00:54:01.251000 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-04 00:54:01.251035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-04 00:54:01.251048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-04 00:54:01.251059 | orchestrator | 2025-09-04 00:54:01.251071 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-09-04 00:54:01.251081 | orchestrator | Thursday 04 September 2025 00:48:51 +0000 (0:00:02.861) 0:01:15.029 **** 2025-09-04 00:54:01.251096 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:54:01.251107 | orchestrator | 2025-09-04 00:54:01.251118 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-09-04 00:54:01.251129 | orchestrator | Thursday 04 September 2025 00:48:52 +0000 (0:00:00.972) 0:01:16.002 **** 2025-09-04 00:54:01.251141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-04 00:54:01.251167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-04 00:54:01.251202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.251215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.251227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-04 00:54:01.251242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-04 00:54:01.251254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.251279 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.251292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-04 00:54:01.251303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-04 00:54:01.251314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.251326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.251337 | orchestrator | 2025-09-04 00:54:01.251347 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-09-04 00:54:01.251358 | orchestrator | Thursday 04 September 2025 00:48:58 +0000 
(0:00:05.826) 0:01:21.829 **** 2025-09-04 00:54:01.251374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-04 00:54:01.251397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-04 00:54:01.251409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.251420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.251431 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.251443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 
'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-04 00:54:01.251454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-04 00:54:01.251470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-04 00:54:01.251500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.251512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-04 00:54:01.251523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.251534 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.251545 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.251557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.251568 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.251579 | orchestrator | 2025-09-04 00:54:01.251589 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-09-04 00:54:01.251600 | orchestrator | Thursday 04 September 2025 00:48:59 +0000 (0:00:01.240) 0:01:23.070 **** 2025-09-04 00:54:01.251612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-04 00:54:01.251633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-04 00:54:01.251644 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.251655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-04 00:54:01.251666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-04 00:54:01.251678 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.251688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-04 00:54:01.251700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-04 00:54:01.251711 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.251722 | orchestrator | 2025-09-04 00:54:01.251738 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-09-04 00:54:01.251750 | orchestrator | Thursday 04 September 2025 00:49:00 +0000 (0:00:01.482) 0:01:24.552 **** 2025-09-04 00:54:01.251760 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:54:01.251771 | 
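The aodh-api items copied a few tasks above carry a 'haproxy' sub-dict with two entries: aodh_api (internal, port 8042) and aodh_api_external (external, same port, fronted by api.testbed.osism.xyz). A rough sketch of how those two entries differ, with values taken from the log; the describe() helper is illustrative only, and the real listener stanzas are rendered by the haproxy-config role's templates, which this log does not show:

    # 'haproxy' sub-dict from the aodh-api item above.
    aodh_haproxy = {
        "aodh_api": {"enabled": "yes", "mode": "http", "external": False,
                     "port": "8042", "listen_port": "8042"},
        "aodh_api_external": {"enabled": "yes", "mode": "http", "external": True,
                              "external_fqdn": "api.testbed.osism.xyz",
                              "port": "8042", "listen_port": "8042"},
    }

    def describe(name: str, svc: dict) -> str:
        """One-line summary per endpoint; not kolla's template output."""
        scope = "external" if svc["external"] else "internal"
        fqdn = svc.get("external_fqdn", "-")
        return f"{name}: {scope} {svc['mode']} listener on :{svc['listen_port']} (fqdn: {fqdn})"

    for name, svc in aodh_haproxy.items():
        print(describe(name, svc))
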
orchestrator | changed: [testbed-node-1] 2025-09-04 00:54:01.251782 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:54:01.251792 | orchestrator | 2025-09-04 00:54:01.251803 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-09-04 00:54:01.251814 | orchestrator | Thursday 04 September 2025 00:49:02 +0000 (0:00:01.440) 0:01:25.992 **** 2025-09-04 00:54:01.251824 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:54:01.251835 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:54:01.251846 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:54:01.251856 | orchestrator | 2025-09-04 00:54:01.251867 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-09-04 00:54:01.251878 | orchestrator | Thursday 04 September 2025 00:49:04 +0000 (0:00:02.044) 0:01:28.037 **** 2025-09-04 00:54:01.251888 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:54:01.251899 | orchestrator | 2025-09-04 00:54:01.251909 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-09-04 00:54:01.251920 | orchestrator | Thursday 04 September 2025 00:49:04 +0000 (0:00:00.700) 0:01:28.738 **** 2025-09-04 00:54:01.251932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-04 00:54:01.251944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.251967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': 
'30'}}})  2025-09-04 00:54:01.251979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-04 00:54:01.251998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-04 00:54:01.252010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.252022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.252039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.252055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.252067 | orchestrator | 2025-09-04 00:54:01.252078 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-09-04 00:54:01.252088 | orchestrator | Thursday 04 September 2025 00:49:08 +0000 (0:00:03.774) 0:01:32.512 **** 2025-09-04 00:54:01.252106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-04 00:54:01.252118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.252129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-04 00:54:01.252147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.252163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.252189 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.252201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.252212 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.252230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-04 00:54:01.252242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.252254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.252271 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.252282 | orchestrator | 2025-09-04 00:54:01.252293 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-09-04 00:54:01.252304 | orchestrator | Thursday 04 September 2025 00:49:09 +0000 (0:00:00.861) 0:01:33.374 **** 2025-09-04 00:54:01.252315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-04 00:54:01.252326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-04 00:54:01.252338 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.252348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-04 00:54:01.252359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-04 00:54:01.252370 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.252385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-04 00:54:01.252397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-04 00:54:01.252407 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.252418 | orchestrator | 2025-09-04 00:54:01.252429 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-09-04 00:54:01.252440 | orchestrator | Thursday 04 September 2025 00:49:10 +0000 (0:00:01.018) 
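The healthcheck blocks in the barbican, aodh and loadbalancer items above all share the same shape; only the probe command differs (healthcheck_curl <url>, healthcheck_port <name> <port>, healthcheck_listen <name> <port>, which appear to be helper scripts shipped inside the kolla images). A small illustrative builder, assuming only that shared shape; make_healthcheck() is not a kolla function:

    def make_healthcheck(test_cmd: str,
                         interval: str = "30", retries: str = "3",
                         start_period: str = "5", timeout: str = "30") -> dict:
        """Mirror the healthcheck structure seen in the log items."""
        return {
            "interval": interval,
            "retries": retries,
            "start_period": start_period,
            "test": ["CMD-SHELL", test_cmd],
            "timeout": timeout,
        }

    # Probe commands taken verbatim from the items above:
    print(make_healthcheck("healthcheck_curl http://192.168.16.10:9311"))
    print(make_healthcheck("healthcheck_port barbican-worker 5672"))
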
0:01:34.392 **** 2025-09-04 00:54:01.252450 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:54:01.252461 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:54:01.252471 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:54:01.252482 | orchestrator | 2025-09-04 00:54:01.252493 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-09-04 00:54:01.252503 | orchestrator | Thursday 04 September 2025 00:49:11 +0000 (0:00:01.366) 0:01:35.758 **** 2025-09-04 00:54:01.252514 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:54:01.252525 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:54:01.252536 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:54:01.252546 | orchestrator | 2025-09-04 00:54:01.252562 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-09-04 00:54:01.252573 | orchestrator | Thursday 04 September 2025 00:49:14 +0000 (0:00:02.148) 0:01:37.906 **** 2025-09-04 00:54:01.252584 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.252595 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.252606 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.252617 | orchestrator | 2025-09-04 00:54:01.252627 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-09-04 00:54:01.252638 | orchestrator | Thursday 04 September 2025 00:49:14 +0000 (0:00:00.297) 0:01:38.203 **** 2025-09-04 00:54:01.252648 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:54:01.252665 | orchestrator | 2025-09-04 00:54:01.252676 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-09-04 00:54:01.252686 | orchestrator | Thursday 04 September 2025 00:49:15 +0000 (0:00:00.711) 0:01:38.914 **** 2025-09-04 00:54:01.252698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-04 00:54:01.252710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 
fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-04 00:54:01.252726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-04 00:54:01.252738 | orchestrator | 2025-09-04 00:54:01.252749 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-09-04 00:54:01.252760 | orchestrator | Thursday 04 September 2025 00:49:17 +0000 (0:00:02.765) 0:01:41.680 **** 2025-09-04 00:54:01.252776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-04 00:54:01.252788 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.252799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-04 00:54:01.252816 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.252828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-04 00:54:01.252839 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.252850 | orchestrator | 2025-09-04 00:54:01.252861 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-09-04 00:54:01.252871 | orchestrator | Thursday 04 September 2025 00:49:19 +0000 (0:00:01.480) 0:01:43.161 **** 2025-09-04 00:54:01.252883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-04 00:54:01.252896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-04 00:54:01.252908 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.252923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-04 00:54:01.252936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-04 00:54:01.252947 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.252973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-04 00:54:01.252995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 
192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-04 00:54:01.253006 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.253017 | orchestrator | 2025-09-04 00:54:01.253027 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-09-04 00:54:01.253038 | orchestrator | Thursday 04 September 2025 00:49:21 +0000 (0:00:01.810) 0:01:44.971 **** 2025-09-04 00:54:01.253049 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.253059 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.253070 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.253080 | orchestrator | 2025-09-04 00:54:01.253091 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-09-04 00:54:01.253102 | orchestrator | Thursday 04 September 2025 00:49:21 +0000 (0:00:00.707) 0:01:45.679 **** 2025-09-04 00:54:01.253112 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.253123 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.253133 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.253144 | orchestrator | 2025-09-04 00:54:01.253154 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-09-04 00:54:01.253165 | orchestrator | Thursday 04 September 2025 00:49:23 +0000 (0:00:01.295) 0:01:46.974 **** 2025-09-04 00:54:01.253214 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:54:01.253226 | orchestrator | 2025-09-04 00:54:01.253237 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-09-04 00:54:01.253248 | orchestrator | Thursday 04 September 2025 00:49:24 +0000 (0:00:00.820) 0:01:47.795 **** 2025-09-04 00:54:01.253259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-04 00:54:01.253270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.253287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.253325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.253338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-04 00:54:01.253350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.253361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.253378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.253402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-04 00:54:01.253414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.253426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-04 
00:54:01.253437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.253448 | orchestrator | 2025-09-04 00:54:01.253459 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-09-04 00:54:01.253470 | orchestrator | Thursday 04 September 2025 00:49:27 +0000 (0:00:03.589) 0:01:51.384 **** 2025-09-04 00:54:01.253486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-04 00:54:01.253503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.253521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.253533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 
'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.253544 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.253556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-04 00:54:01.253567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.253589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.253607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-04 00:54:01.253619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.253630 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.253641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.253653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.253673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  
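For orientation while reading the items above: the dictionaries printed by the haproxy-config tasks are the usual kolla-ansible per-service definitions, and the nested haproxy sub-dict is what drives the frontend/listener snippets being copied. A minimal YAML sketch of the cinder-api entry, reassembled only from the values visible in this log (image tag, ports, healthcheck command and external FQDN as logged; anything not shown is assumed to follow the role defaults), would look roughly like:

    cinder-api:
      container_name: cinder_api
      group: cinder-api
      enabled: true
      image: registry.osism.tech/kolla/cinder-api:2024.2
      volumes:
        - /etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro
        - /etc/localtime:/etc/localtime:ro
        - /etc/timezone:/etc/timezone:ro
        - kolla_logs:/var/log/kolla/
      healthcheck:
        interval: '30'
        retries: '3'
        start_period: '5'
        test: ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776']
        timeout: '30'
      haproxy:
        cinder_api:                 # internal listener on the management VIP
          enabled: 'yes'
          mode: http
          external: false
          port: '8776'
          listen_port: '8776'
          tls_backend: 'no'
        cinder_api_external:        # external listener behind api.testbed.osism.xyz
          enabled: 'yes'
          mode: http
          external: true
          external_fqdn: api.testbed.osism.xyz
          port: '8776'
          listen_port: '8776'
          tls_backend: 'no'

Only the entries carrying such a haproxy sub-dict (here cinder-api) are rendered into HAProxy config and reported as changed; the scheduler, volume and backup items have no haproxy block and are therefore skipped, and the "single external frontend" and firewall items are skipped as well, presumably because the corresponding options are not enabled in this testbed deployment.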
2025-09-04 00:54:01.253685 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.253696 | orchestrator | 2025-09-04 00:54:01.253706 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-09-04 00:54:01.253717 | orchestrator | Thursday 04 September 2025 00:49:28 +0000 (0:00:00.947) 0:01:52.332 **** 2025-09-04 00:54:01.253729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-04 00:54:01.253745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-04 00:54:01.253757 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.253768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-04 00:54:01.253779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-04 00:54:01.253790 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.253801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-04 00:54:01.253812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-04 00:54:01.253822 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.253833 | orchestrator | 2025-09-04 00:54:01.253844 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-09-04 00:54:01.253854 | orchestrator | Thursday 04 September 2025 00:49:29 +0000 (0:00:01.066) 0:01:53.399 **** 2025-09-04 00:54:01.253865 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:54:01.253876 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:54:01.253886 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:54:01.253897 | orchestrator | 2025-09-04 00:54:01.253907 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-09-04 00:54:01.253918 | orchestrator | Thursday 04 September 2025 00:49:30 +0000 (0:00:01.323) 0:01:54.722 **** 2025-09-04 00:54:01.253929 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:54:01.253940 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:54:01.253950 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:54:01.253966 | orchestrator | 2025-09-04 00:54:01.253977 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-09-04 00:54:01.253988 | orchestrator | Thursday 04 September 2025 00:49:33 +0000 (0:00:02.630) 0:01:57.353 **** 2025-09-04 00:54:01.253998 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.254009 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.254049 | orchestrator | 
skipping: [testbed-node-2] 2025-09-04 00:54:01.254060 | orchestrator | 2025-09-04 00:54:01.254071 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-09-04 00:54:01.254081 | orchestrator | Thursday 04 September 2025 00:49:34 +0000 (0:00:00.571) 0:01:57.925 **** 2025-09-04 00:54:01.254092 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.254103 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.254113 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.254124 | orchestrator | 2025-09-04 00:54:01.254135 | orchestrator | TASK [include_role : designate] ************************************************ 2025-09-04 00:54:01.254145 | orchestrator | Thursday 04 September 2025 00:49:34 +0000 (0:00:00.352) 0:01:58.278 **** 2025-09-04 00:54:01.254156 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:54:01.254167 | orchestrator | 2025-09-04 00:54:01.254224 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-09-04 00:54:01.254236 | orchestrator | Thursday 04 September 2025 00:49:35 +0000 (0:00:00.869) 0:01:59.147 **** 2025-09-04 00:54:01.254253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-04 00:54:01.254280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-04 00:54:01.254292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.254304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-04 00:54:01.254323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.254335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.254350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-04 00:54:01.254362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.254381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.254392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.254413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.254425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.254436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.254452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.254468 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-04 00:54:01.254479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-04 00:54:01.254494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.254505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.254515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.254530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.254540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.254550 | orchestrator | 2025-09-04 00:54:01.254559 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-09-04 00:54:01.254569 | orchestrator | Thursday 04 September 2025 00:49:39 +0000 (0:00:04.373) 0:02:03.521 **** 2025-09-04 00:54:01.254585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-04 00:54:01.254601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-04 00:54:01.254612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.254622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.254632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.254656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-04 00:54:01.254673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.254689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-04 
00:54:01.254700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.254710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.254719 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.254729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.254743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.254759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.254775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.254785 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.254795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-04 00:54:01.254805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-04 00:54:01.254815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.254829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.254839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.254859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.254870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.254879 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.254889 | orchestrator | 2025-09-04 00:54:01.254899 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-09-04 00:54:01.254908 | orchestrator | Thursday 04 September 2025 00:49:40 +0000 (0:00:00.897) 0:02:04.418 **** 2025-09-04 00:54:01.254918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-04 00:54:01.254928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-04 00:54:01.254938 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.254948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-04 00:54:01.254957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-04 00:54:01.254967 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.254977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-04 00:54:01.254986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-04 00:54:01.254996 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.255005 | orchestrator | 2025-09-04 00:54:01.255015 | orchestrator | 
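Note on the haproxy-config items above: each service carries a 'haproxy' sub-dict (for designate, 'designate_api' and 'designate_api_external' with 'mode', 'port', 'listen_port' and, for the external variant, 'external_fqdn'). The snippet below is only a rough sketch of how such an entry could be flattened into an HAProxy frontend/backend stanza; it is not the kolla-ansible haproxy-config template. render_service is a hypothetical helper, and the member lines are reconstructed from the node addresses visible in this log.

# Illustrative sketch only, not kolla-ansible code: flatten one 'haproxy'
# service entry (as printed in the task output above) into an HAProxy
# frontend/backend pair. Some services (e.g. glance further below) ship literal
# 'server ...' lines in custom_member_list; others fall back to generated ones.
def render_service(name, svc, default_members):
    lines = [
        f"frontend {name}_front",
        f"    mode {svc.get('mode', 'http')}",
        f"    bind *:{svc.get('listen_port', svc['port'])}",
        f"    default_backend {name}_back",
        f"backend {name}_back",
        f"    mode {svc.get('mode', 'http')}",
    ]
    for member in svc.get('custom_member_list', default_members):
        if member:  # the member lists in the log end with an empty string
            lines.append(f"    {member}")
    return "\n".join(lines)

# Values taken from the 'designate_api' entry shown above (port 9001).
designate_api = {'enabled': 'yes', 'mode': 'http', 'external': False,
                 'port': '9001', 'listen_port': '9001'}
members = [f"server testbed-node-{i} 192.168.16.1{i}:9001 check inter 2000 rise 2 fall 5"
           for i in range(3)]
print(render_service("designate_api", designate_api, members))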
TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-09-04 00:54:01.255024 | orchestrator | Thursday 04 September 2025 00:49:41 +0000 (0:00:00.998) 0:02:05.417 **** 2025-09-04 00:54:01.255034 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:54:01.255043 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:54:01.255053 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:54:01.255062 | orchestrator | 2025-09-04 00:54:01.255072 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-09-04 00:54:01.255081 | orchestrator | Thursday 04 September 2025 00:49:43 +0000 (0:00:01.767) 0:02:07.184 **** 2025-09-04 00:54:01.255090 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:54:01.255100 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:54:01.255114 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:54:01.255123 | orchestrator | 2025-09-04 00:54:01.255138 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-09-04 00:54:01.255148 | orchestrator | Thursday 04 September 2025 00:49:45 +0000 (0:00:01.799) 0:02:08.984 **** 2025-09-04 00:54:01.255157 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.255167 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.255190 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.255200 | orchestrator | 2025-09-04 00:54:01.255209 | orchestrator | TASK [include_role : glance] *************************************************** 2025-09-04 00:54:01.255219 | orchestrator | Thursday 04 September 2025 00:49:45 +0000 (0:00:00.526) 0:02:09.510 **** 2025-09-04 00:54:01.255228 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:54:01.255238 | orchestrator | 2025-09-04 00:54:01.255247 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-09-04 00:54:01.255257 | orchestrator | Thursday 04 September 2025 00:49:46 +0000 (0:00:00.788) 0:02:10.299 **** 2025-09-04 00:54:01.255277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-04 00:54:01.255295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-04 00:54:01.255318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-04 00:54:01.255331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-04 00:54:01.255360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 
fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-04 00:54:01.255372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-04 00:54:01.255383 | orchestrator | 2025-09-04 00:54:01.255393 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-09-04 00:54:01.255403 | orchestrator | Thursday 04 September 2025 00:49:50 +0000 (0:00:04.210) 0:02:14.510 **** 2025-09-04 00:54:01.255423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': 
{'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-04 00:54:01.255440 | orchestrator | 2025-09-04 00:54:01 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:54:01.255450 | orchestrator | 2025-09-04 00:54:01 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:54:01.255461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-04 00:54:01.255472 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.255487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True,
'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-04 00:54:01.255510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-04 00:54:01.255522 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.255536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-04 00:54:01.255560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-04 00:54:01.255571 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.255580 | 
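Note on the 'healthcheck' dicts repeated in the glance items above ('interval', 'retries', 'start_period', 'timeout' given as strings of seconds, plus a CMD-SHELL test such as healthcheck_curl or healthcheck_port): purely as an illustration, and assuming the usual Docker semantics rather than quoting kolla-ansible's own container module, the sketch below shows how such a dict would map onto Docker's HealthConfig shape, where durations are nanoseconds. to_healthconfig is a hypothetical helper.

# Illustrative sketch only: convert a kolla-style healthcheck dict (seconds as
# strings, CMD-SHELL test) into the Docker Engine HealthConfig layout, which
# expects durations in nanoseconds. The real conversion happens inside
# kolla-ansible's container handling; this just makes the fields readable.
NS_PER_SECOND = 1_000_000_000

def to_healthconfig(hc):
    return {
        "Test": hc["test"],  # e.g. ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292']
        "Interval": int(hc["interval"]) * NS_PER_SECOND,
        "Timeout": int(hc["timeout"]) * NS_PER_SECOND,
        "StartPeriod": int(hc["start_period"]) * NS_PER_SECOND,
        "Retries": int(hc["retries"]),
    }

# Values copied from the glance-api healthcheck printed above.
glance_api_hc = {"interval": "30", "retries": "3", "start_period": "5",
                 "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9292"],
                 "timeout": "30"}
print(to_healthconfig(glance_api_hc))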
orchestrator | 2025-09-04 00:54:01.255590 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-09-04 00:54:01.255600 | orchestrator | Thursday 04 September 2025 00:49:53 +0000 (0:00:03.072) 0:02:17.582 **** 2025-09-04 00:54:01.255610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-04 00:54:01.255621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-04 00:54:01.255636 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.255647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-04 00:54:01.255661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-04 00:54:01.255671 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.255687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-04 00:54:01.255698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-04 00:54:01.255707 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.255717 | orchestrator | 2025-09-04 00:54:01.255726 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-09-04 00:54:01.255736 | orchestrator | Thursday 04 September 2025 00:49:56 +0000 (0:00:03.076) 0:02:20.659 **** 2025-09-04 00:54:01.255746 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:54:01.255755 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:54:01.255764 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:54:01.255774 | orchestrator | 2025-09-04 00:54:01.255783 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-09-04 00:54:01.255793 | orchestrator | Thursday 04 September 2025 00:49:58 +0000 (0:00:01.264) 0:02:21.923 **** 2025-09-04 00:54:01.255802 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:54:01.255811 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:54:01.255821 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:54:01.255830 | orchestrator | 2025-09-04 00:54:01.255840 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-09-04 00:54:01.255849 | orchestrator | Thursday 04 September 2025 00:50:00 +0000 (0:00:02.047) 0:02:23.970 **** 2025-09-04 00:54:01.255858 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.255868 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.255883 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.255893 | orchestrator | 2025-09-04 00:54:01.255903 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-09-04 00:54:01.255912 | orchestrator | Thursday 04 September 2025 00:50:00 +0000 (0:00:00.547) 0:02:24.518 **** 2025-09-04 00:54:01.255921 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:54:01.255931 | orchestrator | 2025-09-04 00:54:01.255940 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-09-04 00:54:01.255950 | orchestrator | Thursday 04 September 2025 00:50:01 +0000 (0:00:00.885) 0:02:25.404 **** 2025-09-04 00:54:01.255960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-04 00:54:01.255975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-04 00:54:01.255986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-04 00:54:01.255996 | orchestrator | 2025-09-04 00:54:01.256005 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-09-04 00:54:01.256015 | orchestrator | Thursday 04 September 2025 00:50:04 +0000 (0:00:03.330) 0:02:28.735 **** 2025-09-04 00:54:01.256031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-04 00:54:01.256042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-04 00:54:01.256057 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.256067 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.256077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-04 00:54:01.256087 | orchestrator | 
skipping: [testbed-node-2] 2025-09-04 00:54:01.256097 | orchestrator | 2025-09-04 00:54:01.256106 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-09-04 00:54:01.256116 | orchestrator | Thursday 04 September 2025 00:50:05 +0000 (0:00:00.687) 0:02:29.422 **** 2025-09-04 00:54:01.256125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-04 00:54:01.256135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-04 00:54:01.256145 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.256154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-04 00:54:01.256164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-04 00:54:01.256186 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.256200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-04 00:54:01.256210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-04 00:54:01.256220 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.256229 | orchestrator | 2025-09-04 00:54:01.256239 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-09-04 00:54:01.256248 | orchestrator | Thursday 04 September 2025 00:50:06 +0000 (0:00:00.672) 0:02:30.094 **** 2025-09-04 00:54:01.256258 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:54:01.256267 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:54:01.256277 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:54:01.256286 | orchestrator | 2025-09-04 00:54:01.256296 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-09-04 00:54:01.256305 | orchestrator | Thursday 04 September 2025 00:50:07 +0000 (0:00:01.291) 0:02:31.385 **** 2025-09-04 00:54:01.256314 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:54:01.256324 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:54:01.256333 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:54:01.256343 | orchestrator | 2025-09-04 00:54:01.256357 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-09-04 00:54:01.256374 | orchestrator | Thursday 04 September 2025 00:50:09 +0000 (0:00:02.066) 0:02:33.452 **** 2025-09-04 00:54:01.256383 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.256393 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.256402 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.256412 | orchestrator | 2025-09-04 00:54:01.256421 | orchestrator | TASK [include_role : horizon] 
************************************************** 2025-09-04 00:54:01.256431 | orchestrator | Thursday 04 September 2025 00:50:10 +0000 (0:00:00.565) 0:02:34.018 **** 2025-09-04 00:54:01.256440 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:54:01.256450 | orchestrator | 2025-09-04 00:54:01.256459 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-09-04 00:54:01.256468 | orchestrator | Thursday 04 September 2025 00:50:11 +0000 (0:00:00.946) 0:02:34.965 **** 2025-09-04 00:54:01.256632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-04 00:54:01.256663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-04 00:54:01.256691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}}) 2025-09-04 00:54:01.256703 | orchestrator | 2025-09-04 00:54:01.256717 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-09-04 00:54:01.256727 | orchestrator | Thursday 04 September 2025 00:50:14 +0000 (0:00:03.695) 0:02:38.661 **** 2025-09-04 00:54:01.256738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-04 00:54:01.256754 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.256776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-04 00:54:01.256788 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.256804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-04 00:54:01.256814 | orchestrator | skipping: 
[testbed-node-2] 2025-09-04 00:54:01.256824 | orchestrator | 2025-09-04 00:54:01.256834 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-09-04 00:54:01.256844 | orchestrator | Thursday 04 September 2025 00:50:16 +0000 (0:00:01.206) 0:02:39.867 **** 2025-09-04 00:54:01.256858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-04 00:54:01.256870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-04 00:54:01.256880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-04 00:54:01.256894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-04 00:54:01.256906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-04 00:54:01.256921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-04 00:54:01.256932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-04 00:54:01.256942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-04 00:54:01.256952 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.256962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-04 
00:54:01.256972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-04 00:54:01.256982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-04 00:54:01.256991 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.257001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-04 00:54:01.257011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-04 00:54:01.257026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-04 00:54:01.257036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-04 00:54:01.257045 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.257055 | orchestrator | 2025-09-04 00:54:01.257064 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-09-04 00:54:01.257074 | orchestrator | Thursday 04 September 2025 00:50:17 +0000 (0:00:00.947) 0:02:40.815 **** 2025-09-04 00:54:01.257084 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:54:01.257093 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:54:01.257102 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:54:01.257112 | orchestrator | 2025-09-04 00:54:01.257121 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-09-04 00:54:01.257136 | orchestrator | Thursday 04 September 2025 00:50:18 +0000 (0:00:01.571) 0:02:42.386 **** 2025-09-04 00:54:01.257146 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:54:01.257155 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:54:01.257165 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:54:01.257191 | orchestrator | 2025-09-04 00:54:01.257201 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-09-04 00:54:01.257211 | orchestrator | Thursday 04 September 2025 00:50:20 +0000 (0:00:02.035) 0:02:44.422 **** 2025-09-04 00:54:01.257222 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.257233 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.257248 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.257260 | orchestrator | 2025-09-04 00:54:01.257272 | 
orchestrator | TASK [include_role : ironic] *************************************************** 2025-09-04 00:54:01.257283 | orchestrator | Thursday 04 September 2025 00:50:20 +0000 (0:00:00.325) 0:02:44.747 **** 2025-09-04 00:54:01.257294 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.257305 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.257316 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.257327 | orchestrator | 2025-09-04 00:54:01.257338 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-09-04 00:54:01.257349 | orchestrator | Thursday 04 September 2025 00:50:21 +0000 (0:00:00.682) 0:02:45.430 **** 2025-09-04 00:54:01.257361 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:54:01.257372 | orchestrator | 2025-09-04 00:54:01.257383 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-09-04 00:54:01.257394 | orchestrator | Thursday 04 September 2025 00:50:22 +0000 (0:00:00.900) 0:02:46.331 **** 2025-09-04 00:54:01.257407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-04 00:54:01.257421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-04 00:54:01.257434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-04 00:54:01.257458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 
'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-04 00:54:01.257477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-04 00:54:01.257490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-04 00:54:01.257502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-04 00:54:01.257515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-04 00:54:01.257538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-04 00:54:01.257550 | orchestrator | 2025-09-04 00:54:01.257561 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-09-04 00:54:01.257571 | orchestrator | Thursday 04 September 2025 00:50:26 +0000 (0:00:04.057) 0:02:50.388 **** 2025-09-04 00:54:01.257585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-04 00:54:01.257596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-04 00:54:01.257607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-04 00:54:01.257617 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.257627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-04 00:54:01.257648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-04 00:54:01.257658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-04 00:54:01.257668 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.257685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-04 00:54:01.257697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-04 00:54:01.257707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-04 00:54:01.257717 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.257727 | orchestrator | 2025-09-04 00:54:01.257736 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-09-04 00:54:01.257753 | orchestrator | Thursday 04 September 2025 00:50:27 +0000 (0:00:00.861) 0:02:51.249 **** 2025-09-04 00:54:01.257763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-04 00:54:01.257774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-04 00:54:01.257784 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.257798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-04 00:54:01.257809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-04 00:54:01.257819 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.257829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-04 00:54:01.257838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-04 00:54:01.257848 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.257857 | orchestrator | 2025-09-04 00:54:01.257871 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-09-04 00:54:01.257881 | orchestrator | Thursday 04 September 2025 00:50:28 +0000 (0:00:00.754) 0:02:52.004 **** 2025-09-04 00:54:01.257890 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:54:01.257900 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:54:01.257909 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:54:01.257919 | orchestrator | 2025-09-04 00:54:01.257928 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-09-04 00:54:01.257938 | orchestrator | Thursday 04 September 2025 00:50:29 +0000 (0:00:01.236) 0:02:53.240 **** 2025-09-04 00:54:01.257947 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:54:01.257957 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:54:01.257966 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:54:01.257976 | orchestrator | 2025-09-04 00:54:01.257985 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-09-04 00:54:01.257994 | orchestrator | Thursday 04 September 2025 00:50:31 +0000 (0:00:02.144) 0:02:55.384 **** 2025-09-04 00:54:01.258004 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.258013 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.258063 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.258073 | orchestrator | 2025-09-04 00:54:01.258082 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-09-04 00:54:01.258091 | orchestrator | Thursday 04 September 2025 00:50:32 +0000 (0:00:00.572) 0:02:55.958 **** 2025-09-04 00:54:01.258101 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:54:01.258110 | orchestrator | 2025-09-04 00:54:01.258120 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-09-04 00:54:01.258129 | orchestrator | Thursday 04 September 2025 00:50:33 +0000 (0:00:00.990) 0:02:56.948 **** 2025-09-04 00:54:01.258148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-04 00:54:01.258160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 
'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-04 00:54:01.258227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.258247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.258258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-04 00:54:01.258275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.258285 | orchestrator | 2025-09-04 00:54:01.258294 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-09-04 00:54:01.258304 | orchestrator | Thursday 04 September 2025 00:50:36 +0000 (0:00:03.507) 0:03:00.456 **** 2025-09-04 00:54:01.258314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-04 00:54:01.258331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.258341 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.258355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-04 00:54:01.258366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 
'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.258381 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.258391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-04 00:54:01.258402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.258412 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.258422 | orchestrator | 2025-09-04 00:54:01.258436 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-09-04 00:54:01.258446 | orchestrator | Thursday 04 September 2025 00:50:37 +0000 (0:00:00.953) 0:03:01.409 **** 2025-09-04 00:54:01.258456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-04 00:54:01.258466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-04 00:54:01.258474 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.258482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-04 00:54:01.258490 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-04 00:54:01.258498 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.258506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-04 00:54:01.258514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-04 00:54:01.258526 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.258534 | orchestrator | 2025-09-04 00:54:01.258554 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-09-04 00:54:01.258563 | orchestrator | Thursday 04 September 2025 00:50:38 +0000 (0:00:00.925) 0:03:02.334 **** 2025-09-04 00:54:01.258570 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:54:01.258578 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:54:01.258586 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:54:01.258594 | orchestrator | 2025-09-04 00:54:01.258602 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-09-04 00:54:01.258609 | orchestrator | Thursday 04 September 2025 00:50:39 +0000 (0:00:01.342) 0:03:03.677 **** 2025-09-04 00:54:01.258617 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:54:01.258625 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:54:01.258633 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:54:01.258640 | orchestrator | 2025-09-04 00:54:01.258648 | orchestrator | TASK [include_role : manila] *************************************************** 2025-09-04 00:54:01.258656 | orchestrator | Thursday 04 September 2025 00:50:42 +0000 (0:00:02.107) 0:03:05.784 **** 2025-09-04 00:54:01.258664 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:54:01.258671 | orchestrator | 2025-09-04 00:54:01.258679 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-09-04 00:54:01.258687 | orchestrator | Thursday 04 September 2025 00:50:43 +0000 (0:00:01.278) 0:03:07.062 **** 2025-09-04 00:54:01.258695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-04 00:54:01.258704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.258717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.258726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.258742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-04 00:54:01.258751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.258759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.258768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.258780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-04 00:54:01.258792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.258805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.258814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.258822 | orchestrator | 2025-09-04 00:54:01.258830 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-09-04 00:54:01.258838 | orchestrator | Thursday 04 September 2025 00:50:47 +0000 (0:00:04.285) 0:03:11.348 **** 2025-09-04 00:54:01.258846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-04 00:54:01.258855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.258867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.258885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.258894 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.258902 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-04 00:54:01.258911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.258919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.258928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.258936 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.258948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 
'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-04 00:54:01.258964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.258973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.258981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.258989 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.258997 | orchestrator | 2025-09-04 00:54:01.259005 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-09-04 00:54:01.259013 | orchestrator | Thursday 04 September 2025 00:50:48 +0000 (0:00:00.676) 0:03:12.025 **** 2025-09-04 00:54:01.259021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-04 00:54:01.259029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-04 00:54:01.259037 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.259045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-04 00:54:01.259053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-04 00:54:01.259061 | orchestrator | skipping: [testbed-node-2] 2025-09-04 
00:54:01.259069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-04 00:54:01.259077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-04 00:54:01.259089 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.259097 | orchestrator | 2025-09-04 00:54:01.259105 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-09-04 00:54:01.259113 | orchestrator | Thursday 04 September 2025 00:50:49 +0000 (0:00:01.546) 0:03:13.571 **** 2025-09-04 00:54:01.259125 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:54:01.259133 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:54:01.259141 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:54:01.259148 | orchestrator | 2025-09-04 00:54:01.259156 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-09-04 00:54:01.259164 | orchestrator | Thursday 04 September 2025 00:50:51 +0000 (0:00:01.502) 0:03:15.073 **** 2025-09-04 00:54:01.259185 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:54:01.259193 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:54:01.259201 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:54:01.259208 | orchestrator | 2025-09-04 00:54:01.259216 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-09-04 00:54:01.259224 | orchestrator | Thursday 04 September 2025 00:50:53 +0000 (0:00:02.140) 0:03:17.214 **** 2025-09-04 00:54:01.259232 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:54:01.259239 | orchestrator | 2025-09-04 00:54:01.259247 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-09-04 00:54:01.259255 | orchestrator | Thursday 04 September 2025 00:50:54 +0000 (0:00:01.334) 0:03:18.548 **** 2025-09-04 00:54:01.259263 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-04 00:54:01.259271 | orchestrator | 2025-09-04 00:54:01.259278 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-09-04 00:54:01.259286 | orchestrator | Thursday 04 September 2025 00:50:57 +0000 (0:00:02.839) 0:03:21.388 **** 2025-09-04 00:54:01.259299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-04 00:54:01.259308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-04 00:54:01.259321 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.259339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-04 00:54:01.259349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 
'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-04 00:54:01.259357 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.259366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-04 00:54:01.259384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-04 00:54:01.259392 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.259400 | orchestrator | 2025-09-04 00:54:01.259408 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-09-04 00:54:01.259416 | orchestrator | Thursday 04 September 2025 00:50:59 +0000 (0:00:02.151) 0:03:23.540 **** 2025-09-04 00:54:01.259428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 
'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-04 00:54:01.259437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-04 00:54:01.259449 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.259463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-04 00:54:01.259475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-04 00:54:01.259484 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.259492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-04 00:54:01.259506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 
'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-04 00:54:01.259514 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.259521 | orchestrator | 2025-09-04 00:54:01.259529 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-09-04 00:54:01.259537 | orchestrator | Thursday 04 September 2025 00:51:02 +0000 (0:00:02.348) 0:03:25.888 **** 2025-09-04 00:54:01.259549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-04 00:54:01.259558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-04 00:54:01.259566 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.259578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-04 00:54:01.259586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-04 00:54:01.259595 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.259607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 
'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-04 00:54:01.259616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-04 00:54:01.259624 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.259632 | orchestrator | 2025-09-04 00:54:01.259639 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-09-04 00:54:01.259647 | orchestrator | Thursday 04 September 2025 00:51:05 +0000 (0:00:02.906) 0:03:28.795 **** 2025-09-04 00:54:01.259655 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:54:01.259663 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:54:01.259670 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:54:01.259678 | orchestrator | 2025-09-04 00:54:01.259686 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-09-04 00:54:01.259693 | orchestrator | Thursday 04 September 2025 00:51:06 +0000 (0:00:01.903) 0:03:30.699 **** 2025-09-04 00:54:01.259701 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.259709 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.259716 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.259724 | orchestrator | 2025-09-04 00:54:01.259732 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-09-04 00:54:01.259740 | orchestrator | Thursday 04 September 2025 00:51:08 +0000 (0:00:01.462) 0:03:32.162 **** 2025-09-04 00:54:01.259751 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.259760 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.259767 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.259775 | orchestrator | 2025-09-04 00:54:01.259782 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-09-04 00:54:01.259790 | orchestrator | Thursday 04 September 2025 00:51:08 +0000 (0:00:00.313) 0:03:32.475 **** 2025-09-04 00:54:01.259798 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:54:01.259806 | orchestrator | 2025-09-04 00:54:01.259814 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-09-04 00:54:01.259821 | orchestrator | Thursday 04 September 2025 00:51:10 +0000 (0:00:01.331) 0:03:33.806 **** 2025-09-04 00:54:01.259833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-04 00:54:01.259849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-04 00:54:01.259857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-04 00:54:01.259866 | orchestrator | 2025-09-04 00:54:01.259874 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-09-04 00:54:01.259881 | orchestrator | Thursday 04 September 2025 00:51:11 +0000 (0:00:01.504) 0:03:35.311 **** 2025-09-04 00:54:01.259890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-04 00:54:01.259898 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.259910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': 
True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-04 00:54:01.259919 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.259930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-04 00:54:01.259943 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.259951 | orchestrator | 2025-09-04 00:54:01.259959 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-09-04 00:54:01.259966 | orchestrator | Thursday 04 September 2025 00:51:11 +0000 (0:00:00.373) 0:03:35.685 **** 2025-09-04 00:54:01.259974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-04 00:54:01.259982 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.259991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-04 00:54:01.259999 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.260007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-04 00:54:01.260015 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.260022 | orchestrator | 2025-09-04 00:54:01.260030 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-09-04 00:54:01.260038 | orchestrator | Thursday 04 September 2025 00:51:12 +0000 (0:00:00.865) 0:03:36.551 **** 2025-09-04 00:54:01.260046 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.260053 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.260061 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.260069 | orchestrator | 2025-09-04 00:54:01.260076 | orchestrator | TASK 
[proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-09-04 00:54:01.260084 | orchestrator | Thursday 04 September 2025 00:51:13 +0000 (0:00:00.466) 0:03:37.017 **** 2025-09-04 00:54:01.260092 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.260099 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.260107 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.260115 | orchestrator | 2025-09-04 00:54:01.260123 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-09-04 00:54:01.260130 | orchestrator | Thursday 04 September 2025 00:51:14 +0000 (0:00:01.261) 0:03:38.279 **** 2025-09-04 00:54:01.260138 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.260146 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.260154 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.260161 | orchestrator | 2025-09-04 00:54:01.260169 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-09-04 00:54:01.260188 | orchestrator | Thursday 04 September 2025 00:51:14 +0000 (0:00:00.304) 0:03:38.584 **** 2025-09-04 00:54:01.260196 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:54:01.260204 | orchestrator | 2025-09-04 00:54:01.260211 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-09-04 00:54:01.260219 | orchestrator | Thursday 04 September 2025 00:51:16 +0000 (0:00:01.500) 0:03:40.085 **** 2025-09-04 00:54:01.260232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-04 00:54:01.260249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.260258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.260266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.260275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-04 00:54:01.260283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.260301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-04 00:54:01.260310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-04 00:54:01.260322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.260330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-04 00:54:01.260338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.260347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-04 00:54:01.260478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 
'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-04 00:54:01.260496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.260505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-04 00:54:01.260513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.260521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.260531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-04 00:54:01.260597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.260614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-04 00:54:01.260623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-04 00:54:01.260631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.260640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-04 00:54:01.260648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-04 00:54:01.260709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.260721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-04 00:54:01.260734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.260742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 
'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-04 00:54:01.260751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-04 00:54:01.260759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.260818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-04 00:54:01.260830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-04 00:54:01.260843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-04 00:54:01.260851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.260860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.260876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.260931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-04 00:54:01.260947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.260955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-04 00:54:01.260964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-04 00:54:01.260972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.260981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-04 00:54:01.261041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.261053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-04 00:54:01.261065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-04 00:54:01.261074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.261082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-04 00:54:01.261096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-04 00:54:01.261104 | orchestrator | 2025-09-04 00:54:01.261112 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-09-04 00:54:01.261120 | orchestrator | Thursday 04 September 2025 00:51:20 +0000 (0:00:04.503) 0:03:44.589 **** 2025-09-04 00:54:01.261212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-04 00:54:01.261231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.261240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.261248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.261262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-04 00:54:01.261270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.261329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-04 00:54:01.261344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-04 00:54:01.261353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.261361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-04 00:54:01.261375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.261383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.261436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.261451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-04 00:54:01.261460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-04 00:54:01.261469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.261482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.261490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-04 00:54:01.261543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-04 00:54:01.261559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-04 00:54:01.261567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-04 00:54:01.261576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-04 00:54:01.261591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.261600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.261652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.261664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.261676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-04 00:54:01.261690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-04 00:54:01.261699 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.261707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-04 00:54:01.261757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.261767 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.261774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-04 00:54:01.261781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-04 00:54:01.261795 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-04 00:54:01.261802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.261810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.261817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-04 00:54:01.261882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-04 00:54:01.261900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-04 00:54:01.261912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-04 00:54:01.261919 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.261927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.261934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-04 00:54:01.261959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.261967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': 
False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-04 00:54:01.261977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-04 00:54:01.261984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.261995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-04 00:54:01.262002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-04 00:54:01.262009 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.262034 | orchestrator | 2025-09-04 00:54:01.262042 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-09-04 00:54:01.262049 | orchestrator | Thursday 04 September 2025 00:51:22 +0000 (0:00:01.480) 0:03:46.069 **** 2025-09-04 
00:54:01.262056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-04 00:54:01.262063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-04 00:54:01.262070 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.262094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-04 00:54:01.262102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-04 00:54:01.262108 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.262115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-04 00:54:01.262122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-04 00:54:01.262133 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.262140 | orchestrator | 2025-09-04 00:54:01.262146 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-09-04 00:54:01.262153 | orchestrator | Thursday 04 September 2025 00:51:24 +0000 (0:00:02.002) 0:03:48.071 **** 2025-09-04 00:54:01.262163 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:54:01.262169 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:54:01.262188 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:54:01.262195 | orchestrator | 2025-09-04 00:54:01.262202 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-09-04 00:54:01.262208 | orchestrator | Thursday 04 September 2025 00:51:25 +0000 (0:00:01.256) 0:03:49.328 **** 2025-09-04 00:54:01.262215 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:54:01.262222 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:54:01.262228 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:54:01.262235 | orchestrator | 2025-09-04 00:54:01.262241 | orchestrator | TASK [include_role : placement] ************************************************ 2025-09-04 00:54:01.262248 | orchestrator | Thursday 04 September 2025 00:51:27 +0000 (0:00:01.975) 0:03:51.304 **** 2025-09-04 00:54:01.262255 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:54:01.262261 | orchestrator | 2025-09-04 00:54:01.262268 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-09-04 00:54:01.262274 | orchestrator | Thursday 04 September 2025 00:51:28 +0000 (0:00:01.208) 0:03:52.512 **** 2025-09-04 00:54:01.262281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-04 00:54:01.262289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-04 00:54:01.262313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-04 00:54:01.262325 | orchestrator | 2025-09-04 00:54:01.262331 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-09-04 00:54:01.262338 | orchestrator | Thursday 04 September 2025 00:51:32 +0000 (0:00:03.651) 0:03:56.163 **** 2025-09-04 00:54:01.262348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-04 00:54:01.262355 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.262362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-04 00:54:01.262369 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.262376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-04 00:54:01.262383 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.262389 | orchestrator | 2025-09-04 00:54:01.262397 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-09-04 00:54:01.262405 | orchestrator | Thursday 04 September 2025 00:51:32 +0000 (0:00:00.484) 0:03:56.648 **** 2025-09-04 00:54:01.262412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-04 00:54:01.262420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-04 00:54:01.262433 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.262456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-04 00:54:01.262464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-04 00:54:01.262473 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.262481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-04 00:54:01.262489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-04 00:54:01.262497 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.262505 | orchestrator | 2025-09-04 00:54:01.262513 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-09-04 00:54:01.262524 | orchestrator | Thursday 04 September 2025 00:51:33 +0000 (0:00:00.797) 0:03:57.446 **** 2025-09-04 00:54:01.262532 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:54:01.262540 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:54:01.262548 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:54:01.262555 | orchestrator | 2025-09-04 00:54:01.262563 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-09-04 00:54:01.262571 | orchestrator | Thursday 04 September 2025 00:51:35 +0000 (0:00:01.354) 0:03:58.801 **** 2025-09-04 00:54:01.262579 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:54:01.262587 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:54:01.262595 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:54:01.262603 | orchestrator | 2025-09-04 00:54:01.262611 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-09-04 00:54:01.262619 | orchestrator | Thursday 04 September 2025 00:51:37 +0000 (0:00:02.168) 0:04:00.969 **** 2025-09-04 00:54:01.262627 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:54:01.262635 | orchestrator | 2025-09-04 00:54:01.262643 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-09-04 00:54:01.262651 | orchestrator | Thursday 04 September 2025 00:51:38 +0000 (0:00:01.495) 0:04:02.464 **** 2025-09-04 00:54:01.262660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-04 00:54:01.262674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.262698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.262711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-04 00:54:01.262721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-04 00:54:01.262730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.262743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.262765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.262773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.262780 | orchestrator | 2025-09-04 00:54:01.262786 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-09-04 00:54:01.262798 | orchestrator | Thursday 04 September 2025 00:51:42 +0000 (0:00:04.044) 0:04:06.509 **** 2025-09-04 00:54:01.262806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-04 00:54:01.262813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.262824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.262831 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.262853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-04 
00:54:01.262864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.262872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.262878 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.262886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-04 00:54:01.262897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.262919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-04 00:54:01.262927 | orchestrator | skipping: [testbed-node-2]
2025-09-04 00:54:01.262933 | orchestrator |
2025-09-04 00:54:01.262940 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2025-09-04 00:54:01.262947 | orchestrator | Thursday 04 September 2025 00:51:43 +0000 (0:00:01.220) 0:04:07.730 ****
2025-09-04 00:54:01.262953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-09-04 00:54:01.262960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-09-04 00:54:01.262970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-09-04 00:54:01.262977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-09-04 00:54:01.262984 | orchestrator | skipping: [testbed-node-0]
2025-09-04 00:54:01.262991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-09-04 00:54:01.262998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-09-04 00:54:01.263004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-09-04 00:54:01.263011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-09-04 00:54:01.263022 | orchestrator | skipping: [testbed-node-1]
2025-09-04 00:54:01.263029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-09-04 00:54:01.263035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-09-04 00:54:01.263042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-09-04 00:54:01.263049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-09-04 00:54:01.263056 | orchestrator | skipping: [testbed-node-2]
2025-09-04 00:54:01.263062 | orchestrator |
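Although the firewall items above are skipped, they restate the mapping the haproxy-config role works from: each entry under a service's 'haproxy' dict describes one load-balanced endpoint, entries with 'external': True are published behind the external FQDN api.testbed.osism.xyz, and entries whose 'enabled' is 'no' (such as nova_metadata_external here) are dropped. The sketch below is an illustration only, not the role's actual Jinja2 template; the VIP addresses and the render_listen helper are assumptions made for this example, while the ports and backend IPs are taken from the items above.

# Illustrative sketch only -- not the kolla-ansible haproxy-config template.
# It mimics how a service's 'haproxy' sub-dict (as dumped in the items above)
# maps to HAProxy sections: external entries bind the external VIP, and
# entries whose 'enabled' is 'no' (e.g. nova_metadata_external) emit nothing.
# The two VIP addresses below are assumptions, not values taken from this job.

INTERNAL_VIP = "192.168.16.9"
EXTERNAL_VIP = "192.168.96.9"

def render_listen(name: str, cfg: dict, backends: list) -> str:
    """Render one HAProxy listen section for a single haproxy entry."""
    if cfg.get("enabled") in (False, "no"):
        return ""  # disabled entries produce no configuration
    vip = EXTERNAL_VIP if cfg.get("external") else INTERNAL_VIP
    lines = [
        f"listen {name}",
        f"    mode {cfg.get('mode', 'http')}",
        f"    bind {vip}:{cfg['port']}",
    ]
    lines += [f"    server testbed-node-{i} {host}:{cfg['listen_port']} check"
              for i, host in enumerate(backends)]
    return "\n".join(lines) + "\n"

nova_haproxy = {
    "nova_api": {"enabled": True, "mode": "http", "external": False,
                 "port": "8774", "listen_port": "8774", "tls_backend": "no"},
    "nova_metadata_external": {"enabled": "no", "mode": "http", "external": True,
                               "port": "8775", "listen_port": "8775", "tls_backend": "no"},
}

if __name__ == "__main__":
    backends = ["192.168.16.10", "192.168.16.11", "192.168.16.12"]
    for name, cfg in nova_haproxy.items():
        print(render_listen(name, cfg, backends), end="")

Run as-is, the sketch prints a single listen nova_api section bound to the assumed internal VIP on 8774 with the three controller backends, and silently drops the disabled nova_metadata_external entry, which mirrors the skipped items above.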
2025-09-04 00:54:01.263069 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2025-09-04 00:54:01.263076 | orchestrator | Thursday 04 September 2025 00:51:44 +0000 (0:00:00.934) 0:04:08.664 ****
2025-09-04 00:54:01.263082 | orchestrator | changed: [testbed-node-0]
2025-09-04 00:54:01.263089 | orchestrator | changed: [testbed-node-1]
2025-09-04 00:54:01.263095 | orchestrator | changed: [testbed-node-2]
2025-09-04 00:54:01.263102 | orchestrator |
2025-09-04 00:54:01.263108 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2025-09-04 00:54:01.263115 | orchestrator | Thursday 04 September 2025 00:51:46 +0000 (0:00:01.305) 0:04:09.970 ****
2025-09-04 00:54:01.263122 | orchestrator | changed: [testbed-node-0]
2025-09-04 00:54:01.263128 | orchestrator | changed: [testbed-node-1]
2025-09-04 00:54:01.263135 | orchestrator | changed: [testbed-node-2]
2025-09-04 00:54:01.263141 | orchestrator |
2025-09-04 00:54:01.263148 | orchestrator | TASK [include_role : nova-cell] ************************************************
2025-09-04 00:54:01.263154 | orchestrator | Thursday 04 September 2025 00:51:48 +0000 (0:00:02.028) 0:04:11.998 ****
2025-09-04 00:54:01.263161 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-04 00:54:01.263167 | orchestrator |
2025-09-04 00:54:01.263185 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2025-09-04 00:54:01.263207 | orchestrator | Thursday 04 September 2025 00:51:49 +0000 (0:00:01.563) 0:04:13.562 ****
2025-09-04 00:54:01.263215 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2025-09-04 00:54:01.263221 | orchestrator |
2025-09-04 00:54:01.263228 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2025-09-04 00:54:01.263235 | orchestrator | Thursday 04 September 2025 00:51:50 +0000 (0:00:00.835) 0:04:14.397 ****
2025-09-04 00:54:01.263242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-09-04 00:54:01.263252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra':
['timeout tunnel 1h']}}}}) 2025-09-04 00:54:01.263263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-04 00:54:01.263270 | orchestrator | 2025-09-04 00:54:01.263277 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-09-04 00:54:01.263283 | orchestrator | Thursday 04 September 2025 00:51:54 +0000 (0:00:04.201) 0:04:18.599 **** 2025-09-04 00:54:01.263290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-04 00:54:01.263297 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.263304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-04 00:54:01.263311 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.263318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-04 00:54:01.263325 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.263331 | orchestrator | 2025-09-04 00:54:01.263338 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-09-04 00:54:01.263344 | orchestrator | Thursday 04 September 2025 00:51:56 +0000 (0:00:01.422) 0:04:20.021 **** 2025-09-04 00:54:01.263365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-04 00:54:01.263373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout 
tunnel 1h']}})  2025-09-04 00:54:01.263380 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.263387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-04 00:54:01.263394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-04 00:54:01.263405 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.263414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-04 00:54:01.263422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-04 00:54:01.263428 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.263435 | orchestrator | 2025-09-04 00:54:01.263442 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-04 00:54:01.263448 | orchestrator | Thursday 04 September 2025 00:51:57 +0000 (0:00:01.564) 0:04:21.586 **** 2025-09-04 00:54:01.263455 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:54:01.263461 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:54:01.263468 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:54:01.263474 | orchestrator | 2025-09-04 00:54:01.263481 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-04 00:54:01.263488 | orchestrator | Thursday 04 September 2025 00:52:00 +0000 (0:00:02.448) 0:04:24.035 **** 2025-09-04 00:54:01.263494 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:54:01.263501 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:54:01.263508 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:54:01.263514 | orchestrator | 2025-09-04 00:54:01.263521 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-09-04 00:54:01.263527 | orchestrator | Thursday 04 September 2025 00:52:03 +0000 (0:00:03.243) 0:04:27.278 **** 2025-09-04 00:54:01.263534 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-09-04 00:54:01.263541 | orchestrator | 2025-09-04 00:54:01.263547 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-09-04 00:54:01.263554 | orchestrator | Thursday 04 September 2025 00:52:05 +0000 (0:00:01.501) 0:04:28.779 **** 2025-09-04 00:54:01.263561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-04 00:54:01.263568 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.263575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-04 00:54:01.263582 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.263602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-04 00:54:01.263615 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.263622 | orchestrator | 2025-09-04 00:54:01.263628 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-09-04 00:54:01.263635 | orchestrator | Thursday 04 September 2025 00:52:06 +0000 (0:00:01.325) 0:04:30.105 **** 2025-09-04 00:54:01.263642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-04 00:54:01.263649 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.263659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-04 00:54:01.263666 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.263673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-04 00:54:01.263679 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.263686 | orchestrator | 2025-09-04 00:54:01.263693 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-09-04 00:54:01.263699 | orchestrator | Thursday 04 September 2025 00:52:07 +0000 (0:00:01.259) 0:04:31.364 **** 2025-09-04 00:54:01.263706 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.263712 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.263719 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.263725 | orchestrator | 2025-09-04 00:54:01.263732 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-04 00:54:01.263738 | orchestrator | Thursday 04 September 2025 00:52:09 +0000 (0:00:01.919) 0:04:33.284 **** 2025-09-04 00:54:01.263745 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:54:01.263752 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:54:01.263759 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:54:01.263765 | orchestrator | 2025-09-04 00:54:01.263772 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-04 00:54:01.263778 | orchestrator | Thursday 04 September 2025 00:52:11 +0000 (0:00:02.189) 0:04:35.473 **** 2025-09-04 00:54:01.263785 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:54:01.263791 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:54:01.263798 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:54:01.263805 | orchestrator | 2025-09-04 00:54:01.263811 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-09-04 00:54:01.263818 | orchestrator | Thursday 04 September 2025 00:52:14 +0000 (0:00:03.019) 0:04:38.493 **** 2025-09-04 00:54:01.263825 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-09-04 00:54:01.263835 | orchestrator | 2025-09-04 00:54:01.263842 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-09-04 00:54:01.263849 | orchestrator | Thursday 04 September 2025 00:52:15 +0000 (0:00:00.859) 0:04:39.353 **** 2025-09-04 00:54:01.263856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-04 00:54:01.263863 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.263884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 
'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-04 00:54:01.263892 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.263899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-04 00:54:01.263906 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.263912 | orchestrator | 2025-09-04 00:54:01.263919 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-09-04 00:54:01.263928 | orchestrator | Thursday 04 September 2025 00:52:16 +0000 (0:00:01.332) 0:04:40.685 **** 2025-09-04 00:54:01.263936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-04 00:54:01.263942 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.263949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-04 00:54:01.263956 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.263963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-04 00:54:01.263974 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.263981 | orchestrator | 2025-09-04 00:54:01.263987 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-09-04 00:54:01.263994 | orchestrator | Thursday 04 September 2025 00:52:18 +0000 (0:00:01.307) 0:04:41.992 **** 2025-09-04 00:54:01.264001 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.264007 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.264014 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.264020 | orchestrator | 2025-09-04 
00:54:01.264027 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-04 00:54:01.264033 | orchestrator | Thursday 04 September 2025 00:52:19 +0000 (0:00:01.565) 0:04:43.558 **** 2025-09-04 00:54:01.264040 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:54:01.264047 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:54:01.264053 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:54:01.264060 | orchestrator | 2025-09-04 00:54:01.264066 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-04 00:54:01.264073 | orchestrator | Thursday 04 September 2025 00:52:22 +0000 (0:00:02.333) 0:04:45.891 **** 2025-09-04 00:54:01.264079 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:54:01.264086 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:54:01.264092 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:54:01.264099 | orchestrator | 2025-09-04 00:54:01.264106 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-09-04 00:54:01.264112 | orchestrator | Thursday 04 September 2025 00:52:25 +0000 (0:00:03.158) 0:04:49.050 **** 2025-09-04 00:54:01.264119 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:54:01.264125 | orchestrator | 2025-09-04 00:54:01.264132 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-09-04 00:54:01.264138 | orchestrator | Thursday 04 September 2025 00:52:26 +0000 (0:00:01.568) 0:04:50.618 **** 2025-09-04 00:54:01.264160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-04 00:54:01.264205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-04 00:54:01.264214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-04 00:54:01.264227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-04 00:54:01.264234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-04 00:54:01.264241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.264266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-04 00:54:01.264277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-04 00:54:01.264284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-04 00:54:01.264296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-04 00:54:01.264303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-04 00:54:01.264310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.264332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-04 00:54:01.264340 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-04 00:54:01.264350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.264362 | orchestrator | 2025-09-04 00:54:01.264369 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-09-04 00:54:01.264375 | orchestrator | Thursday 04 September 2025 00:52:30 +0000 (0:00:03.337) 0:04:53.956 **** 2025-09-04 00:54:01.264383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-04 00:54:01.264390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-04 00:54:01.264397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-04 00:54:01.264419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-04 00:54:01.264426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.264433 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.264443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-04 00:54:01.264455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-04 00:54:01.264462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-04 00:54:01.264469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-04 00:54:01.264476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-04 00:54:01.264497 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.264504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-04 00:54:01.264518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-04 00:54:01.264525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': 
'30'}}})
2025-09-04 00:54:01.264532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-04 00:54:01.264539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-04 00:54:01.264546 | orchestrator | skipping: [testbed-node-2]
2025-09-04 00:54:01.264553 | orchestrator |
2025-09-04 00:54:01.264559 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2025-09-04 00:54:01.264566 | orchestrator | Thursday 04 September 2025 00:52:30 +0000 (0:00:00.738) 0:04:54.695 ****
2025-09-04 00:54:01.264573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-09-04 00:54:01.264580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-09-04 00:54:01.264587 | orchestrator | skipping: [testbed-node-0]
2025-09-04 00:54:01.264608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-09-04 00:54:01.264615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-09-04 00:54:01.264622 | orchestrator | skipping: [testbed-node-1]
2025-09-04 00:54:01.264629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-09-04 00:54:01.264640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-09-04 00:54:01.264647 | orchestrator | skipping: [testbed-node-2]
2025-09-04 00:54:01.264653 | orchestrator |
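The proxysql-config tasks that follow stage per-service users and query rules as files for ProxySQL rather than talking to its admin interface directly. To make the intent of those files concrete, here is a hedged sketch, written out as plain ProxySQL admin statements, of the kind of mapping they carry for octavia: a database account for the service and a rule routing its schema to a MariaDB writer hostgroup. The hostgroup id and password are placeholders, the proxysql_statements helper is invented for the example, and the actual file format written by the role is not reproduced here.

# Hedged sketch: the shape of the user/rule mapping the proxysql-config tasks
# stage for octavia, expressed as ProxySQL admin SQL. The hostgroup id and
# password are placeholders; this is not the file format the role writes.

WRITER_HOSTGROUP = 0  # assumption: hostgroup id of the MariaDB writer

def proxysql_statements(service: str, password: str) -> list:
    """Build ProxySQL admin statements for one OpenStack service database."""
    return [
        # account the service uses to reach its database through ProxySQL
        "INSERT INTO mysql_users (username, password, default_hostgroup, active) "
        f"VALUES ('{service}', '{password}', {WRITER_HOSTGROUP}, 1);",
        # route queries against the service schema to the writer hostgroup
        "INSERT INTO mysql_query_rules (active, schemaname, destination_hostgroup, apply) "
        f"VALUES (1, '{service}', {WRITER_HOSTGROUP}, 1);",
        "LOAD MYSQL USERS TO RUNTIME; SAVE MYSQL USERS TO DISK;",
        "LOAD MYSQL QUERY RULES TO RUNTIME; SAVE MYSQL QUERY RULES TO DISK;",
    ]

if __name__ == "__main__":
    for stmt in proxysql_statements("octavia", "REDACTED"):
        print(stmt)

The same pattern repeats for nova, nova-cell and opensearch in the tasks above and below; only the service name, credentials and schema change.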
2025-09-04 00:54:01.264660 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2025-09-04 00:54:01.264667 | orchestrator | Thursday 04 September 2025 00:52:32 +0000 (0:00:01.477) 0:04:56.172 ****
2025-09-04 00:54:01.264676 | orchestrator | changed: [testbed-node-0]
2025-09-04 00:54:01.264683 | orchestrator | changed: [testbed-node-1]
2025-09-04 00:54:01.264689 | orchestrator | changed: [testbed-node-2]
2025-09-04 00:54:01.264696 | orchestrator |
2025-09-04 00:54:01.264702 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2025-09-04 00:54:01.264709 | orchestrator | Thursday 04 September 2025 00:52:33 +0000 (0:00:01.396) 0:04:57.569 ****
2025-09-04 00:54:01.264715 | orchestrator | changed: [testbed-node-0]
2025-09-04 00:54:01.264722 | orchestrator | changed: [testbed-node-1]
2025-09-04 00:54:01.264729 | orchestrator | changed: [testbed-node-2]
2025-09-04 00:54:01.264735 | orchestrator |
2025-09-04 00:54:01.264741 | orchestrator | TASK [include_role : opensearch] ***********************************************
2025-09-04 00:54:01.264747 | orchestrator | Thursday 04 September 2025 00:52:35 +0000 (0:00:02.148) 0:04:59.718 ****
2025-09-04 00:54:01.264753 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-04 00:54:01.264759 | orchestrator |
2025-09-04 00:54:01.264765 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] *****************
2025-09-04 00:54:01.264771 | orchestrator | Thursday 04 September 2025 00:52:37 +0000 (0:00:01.379) 0:05:01.097 ****
2025-09-04 00:54:01.264778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-04 00:54:01.264785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-04 00:54:01.264805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'},
'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-04 00:54:01.264819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-04 00:54:01.264827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-04 00:54:01.264834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 
'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-04 00:54:01.264841 | orchestrator | 2025-09-04 00:54:01.264847 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-09-04 00:54:01.264854 | orchestrator | Thursday 04 September 2025 00:52:43 +0000 (0:00:05.681) 0:05:06.779 **** 2025-09-04 00:54:01.264874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-04 00:54:01.264889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-04 00:54:01.264896 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.264903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-04 00:54:01.264910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-04 00:54:01.264916 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.264936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-04 00:54:01.264950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-04 00:54:01.264957 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.264963 | orchestrator | 2025-09-04 00:54:01.264969 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-09-04 00:54:01.264976 | orchestrator | Thursday 04 September 2025 00:52:43 +0000 (0:00:00.634) 0:05:07.413 **** 2025-09-04 00:54:01.264982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}})  2025-09-04 00:54:01.264988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-04 00:54:01.264995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-04 00:54:01.265001 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.265007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-04 00:54:01.265013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-04 00:54:01.265020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-04 00:54:01.265026 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.265033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-04 00:54:01.265043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-04 00:54:01.265049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-04 00:54:01.265056 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.265062 | orchestrator | 2025-09-04 00:54:01.265068 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-09-04 00:54:01.265074 | orchestrator | Thursday 04 September 2025 00:52:44 +0000 (0:00:00.903) 0:05:08.317 **** 2025-09-04 00:54:01.265080 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.265086 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.265092 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.265098 | orchestrator | 2025-09-04 00:54:01.265104 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-09-04 00:54:01.265110 | orchestrator | Thursday 04 September 2025 00:52:45 +0000 (0:00:00.801) 0:05:09.119 **** 2025-09-04 00:54:01.265116 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.265123 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.265129 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.265135 | orchestrator | 2025-09-04 00:54:01.265154 | orchestrator | TASK [include_role : prometheus] 
*********************************************** 2025-09-04 00:54:01.265161 | orchestrator | Thursday 04 September 2025 00:52:46 +0000 (0:00:01.432) 0:05:10.551 **** 2025-09-04 00:54:01.265167 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:54:01.265184 | orchestrator | 2025-09-04 00:54:01.265190 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-09-04 00:54:01.265196 | orchestrator | Thursday 04 September 2025 00:52:48 +0000 (0:00:01.365) 0:05:11.917 **** 2025-09-04 00:54:01.265206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-04 00:54:01.265213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-04 00:54:01.265220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:54:01.265227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:54:01.265238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}})  2025-09-04 00:54:01.265245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-04 00:54:01.265267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-04 00:54:01.265274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:54:01.265284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:54:01.265290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-04 00:54:01.265301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-04 00:54:01.265308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-04 00:54:01.265315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:54:01.265324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:54:01.265331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-04 00:54:01.265340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-04 00:54:01.265348 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-04 00:54:01.265358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:54:01.265365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:54:01.265372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-04 00:54:01.265383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-04 00:54:01.265392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-04 00:54:01.265404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:54:01.265411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-04 00:54:01.265418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:54:01.265427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 
'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-04 00:54:01.265434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-04 00:54:01.265443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:54:01.265454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:54:01.265460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-04 00:54:01.265467 | orchestrator | 2025-09-04 00:54:01.265473 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-09-04 00:54:01.265479 | orchestrator | Thursday 04 September 2025 00:52:52 +0000 (0:00:04.457) 0:05:16.375 **** 2025-09-04 00:54:01.265485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-04 00:54:01.265492 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-04 00:54:01.265502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:54:01.265511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:54:01.265520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-04 00:54:01.265533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-04 00:54:01.265540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-04 00:54:01.265546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:54:01.265557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-04 00:54:01.265564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:54:01.265570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-04 00:54:01.265580 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.265587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-04 
00:54:01.265593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:54:01.265600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:54:01.265606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-04 00:54:01.265616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-04 00:54:01.265653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  
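
For orientation while reading these loops: every item handed to the haproxy-config tasks is a kolla-ansible service definition, and its optional 'haproxy' sub-dict declares the listeners to be rendered (internal vs. external, mode, port, optional auth_user/auth_pass, active_passive, extra frontend/backend options). The snippet below is a minimal, hypothetical Python sketch, not the role's actual template logic: it takes the prometheus-server definition exactly as printed above, abridged to its 'haproxy' part, and reduces it to the listeners that are switched on. The "yes" string form is handled as well because the rabbitmq entry further down uses it.

# Hypothetical illustration only -- not kolla-ansible code.
prometheus_server = {
    "container_name": "prometheus_server",
    "enabled": True,
    "haproxy": {
        "prometheus_server": {
            "enabled": True, "mode": "http", "external": False,
            "port": "9091", "active_passive": True,
        },
        "prometheus_server_external": {
            "enabled": False, "mode": "http", "external": True,
            "external_fqdn": "api.testbed.osism.xyz",
            "port": "9091", "listen_port": "9091", "active_passive": True,
        },
    },
}

def active_listeners(service):
    """Yield (name, settings) for each enabled HAProxy entry of a service."""
    if not service.get("enabled"):
        return
    for name, cfg in service.get("haproxy", {}).items():
        if cfg.get("enabled") in (True, "yes"):
            yield name, cfg

for name, cfg in active_listeners(prometheus_server):
    scope = "external" if cfg.get("external") else "internal"
    print(f"{name}: {scope} {cfg['mode']} on port {cfg['port']}")
# -> prometheus_server: internal http on port 9091
#    (prometheus_server_external is dropped because its 'enabled' is False)

In this particular run the "when using single external frontend" and "Configuring firewall" variants of these tasks all report skipping, so only the plain "Copying over ... haproxy config" tasks come back as changed on the three testbed nodes.
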
2025-09-04 00:54:01.265667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:54:01.265674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:54:01.265681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-04 00:54:01.265687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-04 00:54:01.265694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-04 00:54:01.265704 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.265711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 
00:54:01.265718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:54:01.265732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-04 00:54:01.265739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-04 00:54:01.265746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-04 00:54:01.265753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:54:01.265762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 00:54:01.265769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-04 00:54:01.265779 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.265786 | orchestrator | 2025-09-04 00:54:01.265792 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-09-04 00:54:01.265798 | orchestrator | Thursday 04 September 2025 00:52:53 +0000 (0:00:01.221) 0:05:17.597 **** 2025-09-04 00:54:01.265807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-04 00:54:01.265814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-04 00:54:01.265821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-04 00:54:01.265827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-04 00:54:01.265834 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.265840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-04 00:54:01.265846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-04 00:54:01.265853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-04 00:54:01.265860 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-04 00:54:01.265866 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.265872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-04 00:54:01.265878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-04 00:54:01.265885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-04 00:54:01.265894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-04 00:54:01.265905 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.265911 | orchestrator | 2025-09-04 00:54:01.265917 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-09-04 00:54:01.265923 | orchestrator | Thursday 04 September 2025 00:52:54 +0000 (0:00:01.020) 0:05:18.617 **** 2025-09-04 00:54:01.265930 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.265936 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.265942 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.265948 | orchestrator | 2025-09-04 00:54:01.265954 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-09-04 00:54:01.265960 | orchestrator | Thursday 04 September 2025 00:52:55 +0000 (0:00:00.450) 0:05:19.068 **** 2025-09-04 00:54:01.265966 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.265972 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.265978 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.265985 | orchestrator | 2025-09-04 00:54:01.265991 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-09-04 00:54:01.265997 | orchestrator | Thursday 04 September 2025 00:52:56 +0000 (0:00:01.429) 0:05:20.497 **** 2025-09-04 00:54:01.266003 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:54:01.266009 | orchestrator | 2025-09-04 00:54:01.266031 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-09-04 00:54:01.266040 | orchestrator | Thursday 04 September 2025 00:52:58 +0000 (0:00:01.728) 0:05:22.226 **** 2025-09-04 00:54:01.266048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 
'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-04 00:54:01.266055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-04 00:54:01.266062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-04 00:54:01.266073 | orchestrator | 2025-09-04 00:54:01.266082 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-09-04 00:54:01.266089 | orchestrator | Thursday 04 September 2025 00:53:00 +0000 (0:00:02.506) 0:05:24.733 **** 2025-09-04 00:54:01.266098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-04 00:54:01.266105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-04 00:54:01.266112 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.266118 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.266125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-04 00:54:01.266131 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.266141 | orchestrator | 2025-09-04 00:54:01.266147 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-09-04 00:54:01.266153 | orchestrator | Thursday 04 September 2025 00:53:01 +0000 (0:00:00.399) 0:05:25.133 **** 2025-09-04 00:54:01.266159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-04 00:54:01.266165 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.266201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-04 
00:54:01.266208 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.266215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-04 00:54:01.266221 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.266227 | orchestrator | 2025-09-04 00:54:01.266233 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-09-04 00:54:01.266239 | orchestrator | Thursday 04 September 2025 00:53:02 +0000 (0:00:00.998) 0:05:26.131 **** 2025-09-04 00:54:01.266249 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.266255 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.266262 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.266268 | orchestrator | 2025-09-04 00:54:01.266274 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-09-04 00:54:01.266280 | orchestrator | Thursday 04 September 2025 00:53:02 +0000 (0:00:00.444) 0:05:26.576 **** 2025-09-04 00:54:01.266286 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.266292 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.266298 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.266304 | orchestrator | 2025-09-04 00:54:01.266310 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-09-04 00:54:01.266317 | orchestrator | Thursday 04 September 2025 00:53:04 +0000 (0:00:01.391) 0:05:27.967 **** 2025-09-04 00:54:01.266323 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:54:01.266329 | orchestrator | 2025-09-04 00:54:01.266335 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-09-04 00:54:01.266341 | orchestrator | Thursday 04 September 2025 00:53:05 +0000 (0:00:01.793) 0:05:29.761 **** 2025-09-04 00:54:01.266350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-04 00:54:01.266358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-04 00:54:01.266368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-04 00:54:01.266378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-04 00:54:01.266390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-04 00:54:01.266397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-04 00:54:01.266407 | orchestrator | 2025-09-04 00:54:01.266414 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-09-04 00:54:01.266420 | orchestrator | Thursday 04 September 2025 00:53:12 +0000 (0:00:06.191) 0:05:35.953 **** 2025-09-04 00:54:01.266426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-04 00:54:01.266436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-04 00:54:01.266443 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.266452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-04 00:54:01.266459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-04 00:54:01.266468 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.266475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-04 00:54:01.266482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-04 00:54:01.266488 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.266494 | orchestrator | 2025-09-04 00:54:01.266500 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] 
*********************** 2025-09-04 00:54:01.266509 | orchestrator | Thursday 04 September 2025 00:53:12 +0000 (0:00:00.637) 0:05:36.590 **** 2025-09-04 00:54:01.266516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-04 00:54:01.266522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-04 00:54:01.266528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-04 00:54:01.266535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-04 00:54:01.266541 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.266550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-04 00:54:01.266556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-04 00:54:01.266563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-04 00:54:01.266572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-04 00:54:01.266579 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.266585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-04 00:54:01.266591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-04 00:54:01.266598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-04 00:54:01.266604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-04 00:54:01.266610 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.266616 | orchestrator | 2025-09-04 00:54:01.266622 | orchestrator | TASK 
[proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-09-04 00:54:01.266628 | orchestrator | Thursday 04 September 2025 00:53:14 +0000 (0:00:01.651) 0:05:38.242 **** 2025-09-04 00:54:01.266635 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:54:01.266641 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:54:01.266647 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:54:01.266653 | orchestrator | 2025-09-04 00:54:01.266659 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-09-04 00:54:01.266665 | orchestrator | Thursday 04 September 2025 00:53:15 +0000 (0:00:01.398) 0:05:39.640 **** 2025-09-04 00:54:01.266671 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:54:01.266677 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:54:01.266683 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:54:01.266689 | orchestrator | 2025-09-04 00:54:01.266696 | orchestrator | TASK [include_role : swift] **************************************************** 2025-09-04 00:54:01.266702 | orchestrator | Thursday 04 September 2025 00:53:18 +0000 (0:00:02.255) 0:05:41.895 **** 2025-09-04 00:54:01.266708 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.266714 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.266720 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.266726 | orchestrator | 2025-09-04 00:54:01.266732 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-09-04 00:54:01.266737 | orchestrator | Thursday 04 September 2025 00:53:18 +0000 (0:00:00.320) 0:05:42.216 **** 2025-09-04 00:54:01.266743 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.266748 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.266753 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.266759 | orchestrator | 2025-09-04 00:54:01.266764 | orchestrator | TASK [include_role : trove] **************************************************** 2025-09-04 00:54:01.266772 | orchestrator | Thursday 04 September 2025 00:53:18 +0000 (0:00:00.310) 0:05:42.526 **** 2025-09-04 00:54:01.266777 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.266783 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.266788 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.266793 | orchestrator | 2025-09-04 00:54:01.266799 | orchestrator | TASK [include_role : venus] **************************************************** 2025-09-04 00:54:01.266804 | orchestrator | Thursday 04 September 2025 00:53:19 +0000 (0:00:00.605) 0:05:43.132 **** 2025-09-04 00:54:01.266813 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.266818 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.266823 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.266829 | orchestrator | 2025-09-04 00:54:01.266834 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-09-04 00:54:01.266839 | orchestrator | Thursday 04 September 2025 00:53:19 +0000 (0:00:00.324) 0:05:43.457 **** 2025-09-04 00:54:01.266845 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.266850 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.266855 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.266860 | orchestrator | 2025-09-04 00:54:01.266866 | orchestrator | TASK [include_role : zun] 
****************************************************** 2025-09-04 00:54:01.266871 | orchestrator | Thursday 04 September 2025 00:53:19 +0000 (0:00:00.308) 0:05:43.766 **** 2025-09-04 00:54:01.266876 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.266882 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.266887 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.266892 | orchestrator | 2025-09-04 00:54:01.266900 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-09-04 00:54:01.266905 | orchestrator | Thursday 04 September 2025 00:53:20 +0000 (0:00:00.819) 0:05:44.586 **** 2025-09-04 00:54:01.266911 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:54:01.266916 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:54:01.266922 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:54:01.266927 | orchestrator | 2025-09-04 00:54:01.266932 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-09-04 00:54:01.266938 | orchestrator | Thursday 04 September 2025 00:53:21 +0000 (0:00:00.743) 0:05:45.330 **** 2025-09-04 00:54:01.266943 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:54:01.266948 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:54:01.266953 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:54:01.266959 | orchestrator | 2025-09-04 00:54:01.266964 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-09-04 00:54:01.266970 | orchestrator | Thursday 04 September 2025 00:53:21 +0000 (0:00:00.356) 0:05:45.686 **** 2025-09-04 00:54:01.266975 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:54:01.266980 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:54:01.266985 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:54:01.266991 | orchestrator | 2025-09-04 00:54:01.266996 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-09-04 00:54:01.267001 | orchestrator | Thursday 04 September 2025 00:53:22 +0000 (0:00:00.925) 0:05:46.612 **** 2025-09-04 00:54:01.267007 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:54:01.267012 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:54:01.267017 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:54:01.267023 | orchestrator | 2025-09-04 00:54:01.267028 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-09-04 00:54:01.267033 | orchestrator | Thursday 04 September 2025 00:53:24 +0000 (0:00:01.304) 0:05:47.916 **** 2025-09-04 00:54:01.267039 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:54:01.267044 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:54:01.267049 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:54:01.267055 | orchestrator | 2025-09-04 00:54:01.267060 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-09-04 00:54:01.267065 | orchestrator | Thursday 04 September 2025 00:53:25 +0000 (0:00:00.899) 0:05:48.816 **** 2025-09-04 00:54:01.267071 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:54:01.267076 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:54:01.267082 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:54:01.267087 | orchestrator | 2025-09-04 00:54:01.267092 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-09-04 00:54:01.267098 | orchestrator | Thursday 04 September 2025 00:53:29 +0000 (0:00:04.595) 
0:05:53.412 **** 2025-09-04 00:54:01.267103 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:54:01.267108 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:54:01.267117 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:54:01.267122 | orchestrator | 2025-09-04 00:54:01.267127 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-09-04 00:54:01.267133 | orchestrator | Thursday 04 September 2025 00:53:33 +0000 (0:00:03.756) 0:05:57.169 **** 2025-09-04 00:54:01.267138 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:54:01.267143 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:54:01.267149 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:54:01.267154 | orchestrator | 2025-09-04 00:54:01.267159 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-09-04 00:54:01.267165 | orchestrator | Thursday 04 September 2025 00:53:41 +0000 (0:00:08.314) 0:06:05.483 **** 2025-09-04 00:54:01.267170 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:54:01.267185 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:54:01.267191 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:54:01.267196 | orchestrator | 2025-09-04 00:54:01.267201 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-09-04 00:54:01.267207 | orchestrator | Thursday 04 September 2025 00:53:45 +0000 (0:00:04.175) 0:06:09.658 **** 2025-09-04 00:54:01.267212 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:54:01.267217 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:54:01.267223 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:54:01.267228 | orchestrator | 2025-09-04 00:54:01.267233 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-09-04 00:54:01.267239 | orchestrator | Thursday 04 September 2025 00:53:55 +0000 (0:00:09.404) 0:06:19.063 **** 2025-09-04 00:54:01.267244 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.267250 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.267255 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.267260 | orchestrator | 2025-09-04 00:54:01.267266 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-09-04 00:54:01.267271 | orchestrator | Thursday 04 September 2025 00:53:55 +0000 (0:00:00.350) 0:06:19.414 **** 2025-09-04 00:54:01.267276 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.267284 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.267290 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.267295 | orchestrator | 2025-09-04 00:54:01.267301 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-09-04 00:54:01.267306 | orchestrator | Thursday 04 September 2025 00:53:55 +0000 (0:00:00.340) 0:06:19.754 **** 2025-09-04 00:54:01.267311 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.267316 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.267322 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.267327 | orchestrator | 2025-09-04 00:54:01.267333 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-09-04 00:54:01.267338 | orchestrator | Thursday 04 September 2025 00:53:56 +0000 (0:00:00.671) 0:06:20.426 **** 2025-09-04 00:54:01.267343 | orchestrator | skipping: [testbed-node-0] 2025-09-04 
00:54:01.267348 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.267354 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.267359 | orchestrator | 2025-09-04 00:54:01.267364 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-09-04 00:54:01.267370 | orchestrator | Thursday 04 September 2025 00:53:56 +0000 (0:00:00.333) 0:06:20.760 **** 2025-09-04 00:54:01.267375 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.267380 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.267386 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.267391 | orchestrator | 2025-09-04 00:54:01.267396 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-09-04 00:54:01.267404 | orchestrator | Thursday 04 September 2025 00:53:57 +0000 (0:00:00.409) 0:06:21.169 **** 2025-09-04 00:54:01.267410 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:54:01.267415 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:54:01.267420 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:54:01.267429 | orchestrator | 2025-09-04 00:54:01.267434 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-09-04 00:54:01.267439 | orchestrator | Thursday 04 September 2025 00:53:57 +0000 (0:00:00.335) 0:06:21.505 **** 2025-09-04 00:54:01.267445 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:54:01.267450 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:54:01.267455 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:54:01.267461 | orchestrator | 2025-09-04 00:54:01.267466 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-09-04 00:54:01.267471 | orchestrator | Thursday 04 September 2025 00:53:59 +0000 (0:00:01.348) 0:06:22.854 **** 2025-09-04 00:54:01.267477 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:54:01.267482 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:54:01.267487 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:54:01.267492 | orchestrator | 2025-09-04 00:54:01.267498 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:54:01.267503 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-09-04 00:54:01.267509 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-09-04 00:54:01.267514 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-09-04 00:54:01.267520 | orchestrator | 2025-09-04 00:54:01.267525 | orchestrator | 2025-09-04 00:54:01.267530 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 00:54:01.267536 | orchestrator | Thursday 04 September 2025 00:53:59 +0000 (0:00:00.861) 0:06:23.715 **** 2025-09-04 00:54:01.267541 | orchestrator | =============================================================================== 2025-09-04 00:54:01.267546 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.40s 2025-09-04 00:54:01.267552 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.31s 2025-09-04 00:54:01.267557 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.19s 2025-09-04 00:54:01.267562 | orchestrator | loadbalancer : 
Copying over proxysql config ----------------------------- 6.13s 2025-09-04 00:54:01.267568 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 5.83s 2025-09-04 00:54:01.267573 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.68s 2025-09-04 00:54:01.267578 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.60s 2025-09-04 00:54:01.267584 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 4.52s 2025-09-04 00:54:01.267589 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.50s 2025-09-04 00:54:01.267594 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.46s 2025-09-04 00:54:01.267599 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.37s 2025-09-04 00:54:01.267605 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.29s 2025-09-04 00:54:01.267610 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.21s 2025-09-04 00:54:01.267615 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.20s 2025-09-04 00:54:01.267621 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.18s 2025-09-04 00:54:01.267626 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.06s 2025-09-04 00:54:01.267631 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.04s 2025-09-04 00:54:01.267636 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.77s 2025-09-04 00:54:01.267642 | orchestrator | loadbalancer : Wait for backup haproxy to start ------------------------- 3.76s 2025-09-04 00:54:01.267647 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 3.75s 2025-09-04 00:54:04.274519 | orchestrator | 2025-09-04 00:54:04 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:54:04.275327 | orchestrator | 2025-09-04 00:54:04 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:54:04.276067 | orchestrator | 2025-09-04 00:54:04 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:54:04.276107 | orchestrator | 2025-09-04 00:54:04 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:54:07.313650 | orchestrator | 2025-09-04 00:54:07 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:54:07.316954 | orchestrator | 2025-09-04 00:54:07 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:54:07.317363 | orchestrator | 2025-09-04 00:54:07 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:54:07.318209 | orchestrator | 2025-09-04 00:54:07 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:54:10.351839 | orchestrator | 2025-09-04 00:54:10 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:54:10.351937 | orchestrator | 2025-09-04 00:54:10 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:54:10.354384 | orchestrator | 2025-09-04 00:54:10 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:54:10.354407 | orchestrator | 2025-09-04 
00:54:10 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:54:13.383839 | orchestrator | 2025-09-04 00:54:13 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:54:13.385280 | orchestrator | 2025-09-04 00:54:13 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:54:13.385912 | orchestrator | 2025-09-04 00:54:13 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:54:13.387331 | orchestrator | 2025-09-04 00:54:13 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:54:16.427848 | orchestrator | 2025-09-04 00:54:16 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:54:16.428644 | orchestrator | 2025-09-04 00:54:16 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:54:16.429643 | orchestrator | 2025-09-04 00:54:16 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:54:16.429676 | orchestrator | 2025-09-04 00:54:16 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:54:19.470510 | orchestrator | 2025-09-04 00:54:19 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:54:19.471783 | orchestrator | 2025-09-04 00:54:19 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:54:19.472739 | orchestrator | 2025-09-04 00:54:19 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:54:19.472984 | orchestrator | 2025-09-04 00:54:19 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:54:22.591346 | orchestrator | 2025-09-04 00:54:22 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:54:22.593396 | orchestrator | 2025-09-04 00:54:22 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:54:22.597442 | orchestrator | 2025-09-04 00:54:22 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:54:22.597483 | orchestrator | 2025-09-04 00:54:22 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:54:25.638564 | orchestrator | 2025-09-04 00:54:25 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:54:25.642566 | orchestrator | 2025-09-04 00:54:25 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:54:25.645359 | orchestrator | 2025-09-04 00:54:25 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:54:25.645407 | orchestrator | 2025-09-04 00:54:25 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:54:28.683568 | orchestrator | 2025-09-04 00:54:28 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:54:28.684102 | orchestrator | 2025-09-04 00:54:28 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:54:28.685348 | orchestrator | 2025-09-04 00:54:28 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:54:28.685373 | orchestrator | 2025-09-04 00:54:28 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:54:31.730712 | orchestrator | 2025-09-04 00:54:31 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:54:31.731046 | orchestrator | 2025-09-04 00:54:31 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:54:31.732510 | orchestrator | 2025-09-04 00:54:31 | INFO  | Task 
2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:54:31.732539 | orchestrator | 2025-09-04 00:54:31 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:54:34.770725 | orchestrator | 2025-09-04 00:54:34 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:54:34.776571 | orchestrator | 2025-09-04 00:54:34 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:54:34.780355 | orchestrator | 2025-09-04 00:54:34 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:54:34.780833 | orchestrator | 2025-09-04 00:54:34 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:54:37.825548 | orchestrator | 2025-09-04 00:54:37 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:54:37.827304 | orchestrator | 2025-09-04 00:54:37 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:54:37.829893 | orchestrator | 2025-09-04 00:54:37 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:54:37.829935 | orchestrator | 2025-09-04 00:54:37 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:54:40.870411 | orchestrator | 2025-09-04 00:54:40 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:54:40.871686 | orchestrator | 2025-09-04 00:54:40 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:54:40.874213 | orchestrator | 2025-09-04 00:54:40 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:54:40.874245 | orchestrator | 2025-09-04 00:54:40 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:54:43.921251 | orchestrator | 2025-09-04 00:54:43 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:54:43.923946 | orchestrator | 2025-09-04 00:54:43 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:54:43.926459 | orchestrator | 2025-09-04 00:54:43 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:54:43.926490 | orchestrator | 2025-09-04 00:54:43 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:54:46.970578 | orchestrator | 2025-09-04 00:54:46 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:54:46.972311 | orchestrator | 2025-09-04 00:54:46 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:54:46.974064 | orchestrator | 2025-09-04 00:54:46 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:54:46.974091 | orchestrator | 2025-09-04 00:54:46 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:54:50.029285 | orchestrator | 2025-09-04 00:54:50 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:54:50.031038 | orchestrator | 2025-09-04 00:54:50 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:54:50.032815 | orchestrator | 2025-09-04 00:54:50 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:54:50.033575 | orchestrator | 2025-09-04 00:54:50 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:54:53.074567 | orchestrator | 2025-09-04 00:54:53 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:54:53.077430 | orchestrator | 2025-09-04 00:54:53 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state 
STARTED 2025-09-04 00:54:53.080217 | orchestrator | 2025-09-04 00:54:53 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:54:53.080313 | orchestrator | 2025-09-04 00:54:53 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:54:56.133806 | orchestrator | 2025-09-04 00:54:56 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:54:56.138427 | orchestrator | 2025-09-04 00:54:56 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:54:56.138463 | orchestrator | 2025-09-04 00:54:56 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:54:56.138476 | orchestrator | 2025-09-04 00:54:56 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:54:59.190945 | orchestrator | 2025-09-04 00:54:59 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:54:59.191959 | orchestrator | 2025-09-04 00:54:59 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:54:59.193112 | orchestrator | 2025-09-04 00:54:59 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:54:59.193157 | orchestrator | 2025-09-04 00:54:59 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:55:02.242241 | orchestrator | 2025-09-04 00:55:02 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:55:02.244439 | orchestrator | 2025-09-04 00:55:02 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:55:02.245591 | orchestrator | 2025-09-04 00:55:02 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:55:02.245615 | orchestrator | 2025-09-04 00:55:02 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:55:05.293087 | orchestrator | 2025-09-04 00:55:05 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:55:05.295042 | orchestrator | 2025-09-04 00:55:05 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:55:05.296315 | orchestrator | 2025-09-04 00:55:05 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:55:05.296340 | orchestrator | 2025-09-04 00:55:05 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:55:08.342644 | orchestrator | 2025-09-04 00:55:08 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:55:08.343841 | orchestrator | 2025-09-04 00:55:08 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:55:08.345825 | orchestrator | 2025-09-04 00:55:08 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:55:08.345944 | orchestrator | 2025-09-04 00:55:08 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:55:11.392971 | orchestrator | 2025-09-04 00:55:11 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:55:11.395554 | orchestrator | 2025-09-04 00:55:11 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:55:11.397928 | orchestrator | 2025-09-04 00:55:11 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:55:11.398232 | orchestrator | 2025-09-04 00:55:11 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:55:14.453261 | orchestrator | 2025-09-04 00:55:14 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:55:14.455731 | orchestrator 
| 2025-09-04 00:55:14 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:55:14.458092 | orchestrator | 2025-09-04 00:55:14 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:55:14.458148 | orchestrator | 2025-09-04 00:55:14 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:55:17.496600 | orchestrator | 2025-09-04 00:55:17 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:55:17.499350 | orchestrator | 2025-09-04 00:55:17 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:55:17.500342 | orchestrator | 2025-09-04 00:55:17 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:55:17.500374 | orchestrator | 2025-09-04 00:55:17 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:55:20.545556 | orchestrator | 2025-09-04 00:55:20 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:55:20.545887 | orchestrator | 2025-09-04 00:55:20 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:55:20.547869 | orchestrator | 2025-09-04 00:55:20 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:55:20.547892 | orchestrator | 2025-09-04 00:55:20 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:55:23.600898 | orchestrator | 2025-09-04 00:55:23 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:55:23.602179 | orchestrator | 2025-09-04 00:55:23 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:55:23.603775 | orchestrator | 2025-09-04 00:55:23 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:55:23.603876 | orchestrator | 2025-09-04 00:55:23 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:55:26.654091 | orchestrator | 2025-09-04 00:55:26 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:55:26.654923 | orchestrator | 2025-09-04 00:55:26 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:55:26.656384 | orchestrator | 2025-09-04 00:55:26 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:55:26.656411 | orchestrator | 2025-09-04 00:55:26 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:55:29.712167 | orchestrator | 2025-09-04 00:55:29 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:55:29.715256 | orchestrator | 2025-09-04 00:55:29 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:55:29.717228 | orchestrator | 2025-09-04 00:55:29 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:55:29.717264 | orchestrator | 2025-09-04 00:55:29 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:55:32.764238 | orchestrator | 2025-09-04 00:55:32 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:55:32.766121 | orchestrator | 2025-09-04 00:55:32 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:55:32.770320 | orchestrator | 2025-09-04 00:55:32 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:55:32.771847 | orchestrator | 2025-09-04 00:55:32 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:55:35.812796 | orchestrator | 2025-09-04 00:55:35 | INFO  | Task 
ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:55:35.815460 | orchestrator | 2025-09-04 00:55:35 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:55:35.818417 | orchestrator | 2025-09-04 00:55:35 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:55:35.818870 | orchestrator | 2025-09-04 00:55:35 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:55:38.863000 | orchestrator | 2025-09-04 00:55:38 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:55:38.864498 | orchestrator | 2025-09-04 00:55:38 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:55:38.866369 | orchestrator | 2025-09-04 00:55:38 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:55:38.866399 | orchestrator | 2025-09-04 00:55:38 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:55:41.912620 | orchestrator | 2025-09-04 00:55:41 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:55:41.913624 | orchestrator | 2025-09-04 00:55:41 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:55:41.915439 | orchestrator | 2025-09-04 00:55:41 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:55:41.915818 | orchestrator | 2025-09-04 00:55:41 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:55:44.959276 | orchestrator | 2025-09-04 00:55:44 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:55:44.961404 | orchestrator | 2025-09-04 00:55:44 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:55:44.963804 | orchestrator | 2025-09-04 00:55:44 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:55:44.964035 | orchestrator | 2025-09-04 00:55:44 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:55:48.014000 | orchestrator | 2025-09-04 00:55:48 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:55:48.016986 | orchestrator | 2025-09-04 00:55:48 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:55:48.019599 | orchestrator | 2025-09-04 00:55:48 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:55:48.019640 | orchestrator | 2025-09-04 00:55:48 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:55:51.066280 | orchestrator | 2025-09-04 00:55:51 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:55:51.067343 | orchestrator | 2025-09-04 00:55:51 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:55:51.069017 | orchestrator | 2025-09-04 00:55:51 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:55:51.069071 | orchestrator | 2025-09-04 00:55:51 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:55:54.120772 | orchestrator | 2025-09-04 00:55:54 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:55:54.122781 | orchestrator | 2025-09-04 00:55:54 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:55:54.125216 | orchestrator | 2025-09-04 00:55:54 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:55:54.125282 | orchestrator | 2025-09-04 00:55:54 | INFO  | Wait 1 second(s) until the next 
check 2025-09-04 00:55:57.169846 | orchestrator | 2025-09-04 00:55:57 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:55:57.172348 | orchestrator | 2025-09-04 00:55:57 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:55:57.174872 | orchestrator | 2025-09-04 00:55:57 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:55:57.174898 | orchestrator | 2025-09-04 00:55:57 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:56:00.222377 | orchestrator | 2025-09-04 00:56:00 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:56:00.225636 | orchestrator | 2025-09-04 00:56:00 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:56:00.227868 | orchestrator | 2025-09-04 00:56:00 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:56:00.228780 | orchestrator | 2025-09-04 00:56:00 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:56:03.279398 | orchestrator | 2025-09-04 00:56:03 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:56:03.279490 | orchestrator | 2025-09-04 00:56:03 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:56:03.280958 | orchestrator | 2025-09-04 00:56:03 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:56:03.280981 | orchestrator | 2025-09-04 00:56:03 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:56:06.333548 | orchestrator | 2025-09-04 00:56:06 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:56:06.333637 | orchestrator | 2025-09-04 00:56:06 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:56:06.333652 | orchestrator | 2025-09-04 00:56:06 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:56:06.333663 | orchestrator | 2025-09-04 00:56:06 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:56:09.384349 | orchestrator | 2025-09-04 00:56:09 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:56:09.387210 | orchestrator | 2025-09-04 00:56:09 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:56:09.390235 | orchestrator | 2025-09-04 00:56:09 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state STARTED 2025-09-04 00:56:09.391218 | orchestrator | 2025-09-04 00:56:09 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:56:12.441297 | orchestrator | 2025-09-04 00:56:12 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:56:12.442738 | orchestrator | 2025-09-04 00:56:12 | INFO  | Task 6f36cbd2-ca76-495e-ac06-a32a9520e4a1 is in state STARTED 2025-09-04 00:56:12.443750 | orchestrator | 2025-09-04 00:56:12 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:56:12.449849 | orchestrator | 2025-09-04 00:56:12 | INFO  | Task 2455333a-1467-4e15-afeb-18d3544d41aa is in state SUCCESS 2025-09-04 00:56:12.452249 | orchestrator | 2025-09-04 00:56:12.452381 | orchestrator | 2025-09-04 00:56:12.452400 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-09-04 00:56:12.452411 | orchestrator | 2025-09-04 00:56:12.452420 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-09-04 
00:56:12.452430 | orchestrator | Thursday 04 September 2025 00:44:57 +0000 (0:00:00.893) 0:00:00.893 **** 2025-09-04 00:56:12.452444 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:56:12.452494 | orchestrator | 2025-09-04 00:56:12.452566 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-09-04 00:56:12.452588 | orchestrator | Thursday 04 September 2025 00:44:58 +0000 (0:00:01.603) 0:00:02.496 **** 2025-09-04 00:56:12.452608 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.452626 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.452642 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.452693 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.452807 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.452829 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.452847 | orchestrator | 2025-09-04 00:56:12.452861 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-09-04 00:56:12.452872 | orchestrator | Thursday 04 September 2025 00:45:00 +0000 (0:00:01.906) 0:00:04.402 **** 2025-09-04 00:56:12.452884 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.452895 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.452906 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.452917 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.452927 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.452937 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.452949 | orchestrator | 2025-09-04 00:56:12.452960 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-09-04 00:56:12.452973 | orchestrator | Thursday 04 September 2025 00:45:01 +0000 (0:00:00.906) 0:00:05.308 **** 2025-09-04 00:56:12.453021 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.453041 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.453059 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.453109 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.453120 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.453130 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.453139 | orchestrator | 2025-09-04 00:56:12.453149 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-09-04 00:56:12.453158 | orchestrator | Thursday 04 September 2025 00:45:02 +0000 (0:00:01.054) 0:00:06.363 **** 2025-09-04 00:56:12.453167 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.453176 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.453186 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.453195 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.453204 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.453213 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.453222 | orchestrator | 2025-09-04 00:56:12.453244 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-09-04 00:56:12.453254 | orchestrator | Thursday 04 September 2025 00:45:03 +0000 (0:00:00.941) 0:00:07.305 **** 2025-09-04 00:56:12.453263 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.453272 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.453282 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.453291 | orchestrator | ok: [testbed-node-5] 
2025-09-04 00:56:12.453300 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.453309 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.453318 | orchestrator | 2025-09-04 00:56:12.453328 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-09-04 00:56:12.453337 | orchestrator | Thursday 04 September 2025 00:45:04 +0000 (0:00:00.678) 0:00:07.984 **** 2025-09-04 00:56:12.453347 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.453369 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.453379 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.453388 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.453398 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.453407 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.453416 | orchestrator | 2025-09-04 00:56:12.453426 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-09-04 00:56:12.453436 | orchestrator | Thursday 04 September 2025 00:45:06 +0000 (0:00:02.032) 0:00:10.016 **** 2025-09-04 00:56:12.453445 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.453455 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.453465 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.453474 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.453483 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.453493 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.453502 | orchestrator | 2025-09-04 00:56:12.453512 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-09-04 00:56:12.453521 | orchestrator | Thursday 04 September 2025 00:45:07 +0000 (0:00:01.172) 0:00:11.189 **** 2025-09-04 00:56:12.453531 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.453540 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.453549 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.453559 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.453568 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.453577 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.453586 | orchestrator | 2025-09-04 00:56:12.453596 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-09-04 00:56:12.453605 | orchestrator | Thursday 04 September 2025 00:45:08 +0000 (0:00:00.911) 0:00:12.100 **** 2025-09-04 00:56:12.453615 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-04 00:56:12.453625 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-04 00:56:12.453634 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-04 00:56:12.453643 | orchestrator | 2025-09-04 00:56:12.453653 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-09-04 00:56:12.453662 | orchestrator | Thursday 04 September 2025 00:45:09 +0000 (0:00:00.578) 0:00:12.679 **** 2025-09-04 00:56:12.453671 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.453681 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.453690 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.453699 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.453709 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.453718 | orchestrator | ok: [testbed-node-0] 2025-09-04 
00:56:12.453727 | orchestrator | 2025-09-04 00:56:12.453749 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-09-04 00:56:12.453759 | orchestrator | Thursday 04 September 2025 00:45:10 +0000 (0:00:01.145) 0:00:13.825 **** 2025-09-04 00:56:12.453769 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-04 00:56:12.453784 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-04 00:56:12.453803 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-04 00:56:12.453822 | orchestrator | 2025-09-04 00:56:12.453918 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-09-04 00:56:12.454007 | orchestrator | Thursday 04 September 2025 00:45:13 +0000 (0:00:03.264) 0:00:17.090 **** 2025-09-04 00:56:12.454285 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-04 00:56:12.454307 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-04 00:56:12.454326 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-04 00:56:12.454344 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.454363 | orchestrator | 2025-09-04 00:56:12.454382 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-09-04 00:56:12.454408 | orchestrator | Thursday 04 September 2025 00:45:14 +0000 (0:00:00.596) 0:00:17.687 **** 2025-09-04 00:56:12.454419 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.454431 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.454441 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.454451 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.454460 | orchestrator | 2025-09-04 00:56:12.454470 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-09-04 00:56:12.454485 | orchestrator | Thursday 04 September 2025 00:45:15 +0000 (0:00:01.774) 0:00:19.462 **** 2025-09-04 00:56:12.454497 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.454509 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.454519 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.454529 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.454538 | orchestrator | 2025-09-04 00:56:12.454548 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-09-04 00:56:12.454559 | orchestrator | Thursday 04 September 2025 00:45:16 +0000 (0:00:00.503) 0:00:19.966 **** 2025-09-04 00:56:12.454629 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-04 00:45:11.046507', 'end': '2025-09-04 00:45:11.350728', 'delta': '0:00:00.304221', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.454655 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-04 00:45:11.994372', 'end': '2025-09-04 00:45:12.283769', 'delta': '0:00:00.289397', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.454734 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-04 00:45:12.974293', 'end': '2025-09-04 00:45:13.298469', 'delta': '0:00:00.324176', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.454757 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.454775 | orchestrator | 2025-09-04 00:56:12.454794 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-09-04 00:56:12.454847 | orchestrator | Thursday 04 
September 2025 00:45:16 +0000 (0:00:00.435) 0:00:20.402 **** 2025-09-04 00:56:12.454864 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.454880 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.454897 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.454913 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.455149 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.455177 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.455188 | orchestrator | 2025-09-04 00:56:12.455198 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-09-04 00:56:12.455208 | orchestrator | Thursday 04 September 2025 00:45:20 +0000 (0:00:03.322) 0:00:23.725 **** 2025-09-04 00:56:12.455217 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-04 00:56:12.455227 | orchestrator | 2025-09-04 00:56:12.455237 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-09-04 00:56:12.455246 | orchestrator | Thursday 04 September 2025 00:45:21 +0000 (0:00:01.105) 0:00:24.830 **** 2025-09-04 00:56:12.455256 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.455265 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.455275 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.455284 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.455297 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.455314 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.455331 | orchestrator | 2025-09-04 00:56:12.455347 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-09-04 00:56:12.455363 | orchestrator | Thursday 04 September 2025 00:45:22 +0000 (0:00:01.340) 0:00:26.171 **** 2025-09-04 00:56:12.455379 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.455395 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.455411 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.455428 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.455444 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.455461 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.455477 | orchestrator | 2025-09-04 00:56:12.455493 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-04 00:56:12.455649 | orchestrator | Thursday 04 September 2025 00:45:24 +0000 (0:00:01.798) 0:00:27.970 **** 2025-09-04 00:56:12.455668 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.455686 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.455702 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.455713 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.455722 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.455742 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.455751 | orchestrator | 2025-09-04 00:56:12.455761 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-09-04 00:56:12.455770 | orchestrator | Thursday 04 September 2025 00:45:25 +0000 (0:00:01.110) 0:00:29.080 **** 2025-09-04 00:56:12.455780 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.455789 | orchestrator | 2025-09-04 00:56:12.455799 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-09-04 00:56:12.455808 | orchestrator | Thursday 04 
September 2025 00:45:25 +0000 (0:00:00.114) 0:00:29.194 **** 2025-09-04 00:56:12.455817 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.455827 | orchestrator | 2025-09-04 00:56:12.455836 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-04 00:56:12.455845 | orchestrator | Thursday 04 September 2025 00:45:25 +0000 (0:00:00.247) 0:00:29.442 **** 2025-09-04 00:56:12.455855 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.455864 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.455874 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.455883 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.455892 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.455902 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.455911 | orchestrator | 2025-09-04 00:56:12.455929 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-09-04 00:56:12.455939 | orchestrator | Thursday 04 September 2025 00:45:26 +0000 (0:00:00.579) 0:00:30.021 **** 2025-09-04 00:56:12.455949 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.455958 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.455968 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.455977 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.455986 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.455996 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.456005 | orchestrator | 2025-09-04 00:56:12.456015 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-09-04 00:56:12.456025 | orchestrator | Thursday 04 September 2025 00:45:27 +0000 (0:00:00.738) 0:00:30.760 **** 2025-09-04 00:56:12.456034 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.456044 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.456053 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.456063 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.456090 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.456106 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.456124 | orchestrator | 2025-09-04 00:56:12.456143 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-09-04 00:56:12.456162 | orchestrator | Thursday 04 September 2025 00:45:28 +0000 (0:00:00.832) 0:00:31.592 **** 2025-09-04 00:56:12.456180 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.456199 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.456215 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.456225 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.456235 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.456244 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.456254 | orchestrator | 2025-09-04 00:56:12.456263 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-09-04 00:56:12.456273 | orchestrator | Thursday 04 September 2025 00:45:29 +0000 (0:00:01.160) 0:00:32.752 **** 2025-09-04 00:56:12.456283 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.456292 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.456302 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.456311 | orchestrator | skipping: [testbed-node-4] 
2025-09-04 00:56:12.456320 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.456330 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.456339 | orchestrator | 2025-09-04 00:56:12.456349 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-09-04 00:56:12.456366 | orchestrator | Thursday 04 September 2025 00:45:29 +0000 (0:00:00.600) 0:00:33.353 **** 2025-09-04 00:56:12.456375 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.456385 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.456394 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.456404 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.456419 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.456429 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.456438 | orchestrator | 2025-09-04 00:56:12.456448 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-09-04 00:56:12.456457 | orchestrator | Thursday 04 September 2025 00:45:30 +0000 (0:00:00.680) 0:00:34.034 **** 2025-09-04 00:56:12.456467 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.456476 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.456485 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.456495 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.456504 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.456514 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.456523 | orchestrator | 2025-09-04 00:56:12.456539 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-09-04 00:56:12.456578 | orchestrator | Thursday 04 September 2025 00:45:31 +0000 (0:00:00.647) 0:00:34.681 **** 2025-09-04 00:56:12.456591 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e3ae8186--8e2b--5882--a4eb--fc7766e33302-osd--block--e3ae8186--8e2b--5882--a4eb--fc7766e33302', 'dm-uuid-LVM-4vjUyByvbTXE2cfhT7hlQJZ9LjnIa5lcePmGsAiEDknKBRRuXTfUfJ1EIzE0QhFk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.456602 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7cbfcabc--5503--586a--bc65--b738adbafdd0-osd--block--7cbfcabc--5503--586a--bc65--b738adbafdd0', 'dm-uuid-LVM-eekyZqxUzJD44xvcGvdxubJyRd3iPZ8aI462i0beb8i872Gol5wCt22vfCvj5oW0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.456629 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.456640 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.456651 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.456661 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.456678 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.456692 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--23145802--20b8--57b1--85d3--97c3024bf9a4-osd--block--23145802--20b8--57b1--85d3--97c3024bf9a4', 'dm-uuid-LVM-M4Gv3tPZtWstc4V6EMcB6GcGcxkJLygbCgznPhTWcNnLUwfcj96SpOUmAxnQUtyl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.456703 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.456713 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f7bb41d6--3a07--594b--a5ed--81bcd3eca58b-osd--block--f7bb41d6--3a07--594b--a5ed--81bcd3eca58b', 'dm-uuid-LVM-BF4TTHSAu0Ubz13YxHfu7w3gnjdTioaP9Ve11VKbcW0EjtMkZ80KCjjDPwxmxizG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 
'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.456723 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.456739 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.456749 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.456766 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472', 'scsi-SQEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472-part1', 'scsi-SQEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472-part14', 'scsi-SQEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472-part15', 'scsi-SQEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472-part16', 'scsi-SQEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 
'vendor': 'QEMU', 'virtual': 1}})  2025-09-04 00:56:12.456784 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--e3ae8186--8e2b--5882--a4eb--fc7766e33302-osd--block--e3ae8186--8e2b--5882--a4eb--fc7766e33302'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5AfiQI-1laC-THaR-1i1Z-gMhy-NdET-RnIBVH', 'scsi-0QEMU_QEMU_HARDDISK_7a7dbb75-788e-43b1-b83f-869b645534fa', 'scsi-SQEMU_QEMU_HARDDISK_7a7dbb75-788e-43b1-b83f-869b645534fa'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-04 00:56:12.456795 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.456811 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--7cbfcabc--5503--586a--bc65--b738adbafdd0-osd--block--7cbfcabc--5503--586a--bc65--b738adbafdd0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ferPyo-yAsd-4A40-ipP1-yum7-Q4gw-IDZAP0', 'scsi-0QEMU_QEMU_HARDDISK_9f385429-6afa-4c66-a502-4318c5de8bbc', 'scsi-SQEMU_QEMU_HARDDISK_9f385429-6afa-4c66-a502-4318c5de8bbc'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-04 00:56:12.456822 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.456843 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_061ec2a1-a81e-446f-a16e-21a91d07e637', 'scsi-SQEMU_QEMU_HARDDISK_061ec2a1-a81e-446f-a16e-21a91d07e637'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-04 00:56:12.456858 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.456868 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-04-00-02-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-04 00:56:12.456878 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.456888 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4210a630--ee04--544c--a974--f1a5ed1af07b-osd--block--4210a630--ee04--544c--a974--f1a5ed1af07b', 'dm-uuid-LVM-RgfZvjfqrEZYQe7xidzbnkMdq2kk7SfUzDVMMUqdThVgyrunAKGu0TVv8LGAXis6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.456904 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f811dc1d--ef4c--5806--9269--19449158751d-osd--block--f811dc1d--ef4c--5806--9269--19449158751d', 'dm-uuid-LVM-4QPi7SlwXSqadW2vH7XkyFokqG8HVkr7YqDSWgMIcVKOvznkX8XTXJglZn1wCQPL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.456915 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.456930 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.456940 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.456951 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.456964 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.456974 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.456984 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.456994 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.457004 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.457014 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.457030 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.457046 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.457061 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d', 'scsi-SQEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d-part1', 'scsi-SQEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d-part14', 'scsi-SQEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d-part15', 'scsi-SQEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d-part16', 'scsi-SQEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-04 00:56:12.457144 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 
'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb', 'scsi-SQEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb-part1', 'scsi-SQEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb-part14', 'scsi-SQEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb-part15', 'scsi-SQEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb-part16', 'scsi-SQEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-04 00:56:12.457165 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4210a630--ee04--544c--a974--f1a5ed1af07b-osd--block--4210a630--ee04--544c--a974--f1a5ed1af07b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CLO88k-jG6Z-1XWr-vKrj-ZYf3-iQib-Q9fjpq', 'scsi-0QEMU_QEMU_HARDDISK_d9ebab6f-eb3e-4afd-8760-ef197f6a89e1', 'scsi-SQEMU_QEMU_HARDDISK_d9ebab6f-eb3e-4afd-8760-ef197f6a89e1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-04 00:56:12.457181 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--23145802--20b8--57b1--85d3--97c3024bf9a4-osd--block--23145802--20b8--57b1--85d3--97c3024bf9a4'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2SSp5E-3yAW-Tbzh-mbWS-dokf-F4dZ-pZhPAG', 'scsi-0QEMU_QEMU_HARDDISK_29dd0003-fb59-4bf4-b86e-44c71c2fb42f', 'scsi-SQEMU_QEMU_HARDDISK_29dd0003-fb59-4bf4-b86e-44c71c2fb42f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-04 00:56:12.457193 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f811dc1d--ef4c--5806--9269--19449158751d-osd--block--f811dc1d--ef4c--5806--9269--19449158751d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-mdT39u-qveb-3l9j-3x7I-kS0Q-PhUc-MrZPOh', 'scsi-0QEMU_QEMU_HARDDISK_bcfba7c6-2fb5-4d57-9e19-86dc2ee83b3a', 'scsi-SQEMU_QEMU_HARDDISK_bcfba7c6-2fb5-4d57-9e19-86dc2ee83b3a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-04 00:56:12.457203 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a760aba3-b640-428b-b8ab-7cd9fc53fd0e', 'scsi-SQEMU_QEMU_HARDDISK_a760aba3-b640-428b-b8ab-7cd9fc53fd0e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-04 00:56:12.457219 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-04-00-02-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-04 00:56:12.457235 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f7bb41d6--3a07--594b--a5ed--81bcd3eca58b-osd--block--f7bb41d6--3a07--594b--a5ed--81bcd3eca58b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CdcaIX-bgqq-oQhe-4pd6-ASKm-1LJD-QCGPEZ', 'scsi-0QEMU_QEMU_HARDDISK_1db7e25a-216c-47db-963d-7c5f28c46cb6', 'scsi-SQEMU_QEMU_HARDDISK_1db7e25a-216c-47db-963d-7c5f28c46cb6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-04 00:56:12.457245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.457259 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10337e6d-8da5-41a0-908c-89b7c0023a77', 'scsi-SQEMU_QEMU_HARDDISK_10337e6d-8da5-41a0-908c-89b7c0023a77'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-04 00:56:12.457269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.457279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.457289 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-04-00-02-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-04 00:56:12.457299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.457320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.457331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.457340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.457350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.457365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0f9fbc0-3dcb-459a-ad8b-9f15e0856598', 'scsi-SQEMU_QEMU_HARDDISK_e0f9fbc0-3dcb-459a-ad8b-9f15e0856598'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0f9fbc0-3dcb-459a-ad8b-9f15e0856598-part1', 'scsi-SQEMU_QEMU_HARDDISK_e0f9fbc0-3dcb-459a-ad8b-9f15e0856598-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0f9fbc0-3dcb-459a-ad8b-9f15e0856598-part14', 'scsi-SQEMU_QEMU_HARDDISK_e0f9fbc0-3dcb-459a-ad8b-9f15e0856598-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0f9fbc0-3dcb-459a-ad8b-9f15e0856598-part15', 'scsi-SQEMU_QEMU_HARDDISK_e0f9fbc0-3dcb-459a-ad8b-9f15e0856598-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0f9fbc0-3dcb-459a-ad8b-9f15e0856598-part16', 'scsi-SQEMU_QEMU_HARDDISK_e0f9fbc0-3dcb-459a-ad8b-9f15e0856598-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-04 00:56:12.457387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-04-00-02-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-04 00:56:12.457398 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.457408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.457418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
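Each skipped item in the output above is one entry from Ansible's ansible_facts['devices'] dictionary, keyed by kernel device name (loop0..loop7, sda, sdd, sr0, dm-0, ...), carrying the holders, links, partitions and size attributes that the ceph-facts loop iterates over. Below is a minimal Python sketch of that shape, abridged from the testbed-node-0 'sda' and testbed-node-4 'sdd' entries shown above; pick_candidate_disks is a hypothetical helper added purely for illustration and is not part of the job or of ceph-ansible.

# Minimal sketch, not code from the job: shows the shape of the
# ansible_facts['devices'] entries dumped in the skipped loop items above.
# Values are abridged from the testbed-node-0 'sda' and testbed-node-4 'sdd'
# entries; pick_candidate_disks is a hypothetical helper for illustration only.
devices = {
    "sda": {                      # root disk: partitioned, so not a blank data disk
        "model": "QEMU HARDDISK",
        "size": "80.00 GB",
        "holders": [],
        "partitions": {"sda1": {"size": "79.00 GB"}, "sda15": {"size": "106.00 MB"}},
    },
    "sdd": {                      # spare 20 GB disk on the OSD nodes: no partitions, no holders
        "model": "QEMU HARDDISK",
        "size": "20.00 GB",
        "holders": [],
        "partitions": {},
    },
    "loop0": {"model": None, "size": "0.00 Bytes", "holders": [], "partitions": {}},
}

def pick_candidate_disks(devices: dict) -> list[str]:
    """Return names of plain, unused block devices (ignores loop/dm/sr devices)."""
    return [
        name for name, info in devices.items()
        if not name.startswith(("loop", "dm-", "sr"))
        and not info["partitions"] and not info["holders"]
    ]

print(pick_candidate_disks(devices))   # -> ['sdd']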
2025-09-04 00:56:12.457428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.457437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.457451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.457461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.457471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.457481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.457505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f348311-5206-43c3-98ff-251796244150', 'scsi-SQEMU_QEMU_HARDDISK_2f348311-5206-43c3-98ff-251796244150'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f348311-5206-43c3-98ff-251796244150-part1', 'scsi-SQEMU_QEMU_HARDDISK_2f348311-5206-43c3-98ff-251796244150-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f348311-5206-43c3-98ff-251796244150-part14', 'scsi-SQEMU_QEMU_HARDDISK_2f348311-5206-43c3-98ff-251796244150-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f348311-5206-43c3-98ff-251796244150-part15', 'scsi-SQEMU_QEMU_HARDDISK_2f348311-5206-43c3-98ff-251796244150-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f348311-5206-43c3-98ff-251796244150-part16', 'scsi-SQEMU_QEMU_HARDDISK_2f348311-5206-43c3-98ff-251796244150-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-04 00:56:12.457524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-04-00-02-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-04 00:56:12.457535 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.457545 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.457554 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.457564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.457574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.457589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.457599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.457614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.457624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.457634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.457644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:56:12.457656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e94f9961-caee-4d19-af27-42e09fc65b04', 'scsi-SQEMU_QEMU_HARDDISK_e94f9961-caee-4d19-af27-42e09fc65b04'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e94f9961-caee-4d19-af27-42e09fc65b04-part1', 'scsi-SQEMU_QEMU_HARDDISK_e94f9961-caee-4d19-af27-42e09fc65b04-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e94f9961-caee-4d19-af27-42e09fc65b04-part14', 'scsi-SQEMU_QEMU_HARDDISK_e94f9961-caee-4d19-af27-42e09fc65b04-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e94f9961-caee-4d19-af27-42e09fc65b04-part15', 'scsi-SQEMU_QEMU_HARDDISK_e94f9961-caee-4d19-af27-42e09fc65b04-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e94f9961-caee-4d19-af27-42e09fc65b04-part16', 'scsi-SQEMU_QEMU_HARDDISK_e94f9961-caee-4d19-af27-42e09fc65b04-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-04 00:56:12.457674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-04-00-02-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-04 00:56:12.457682 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.457690 | orchestrator | 2025-09-04 00:56:12.457698 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-09-04 00:56:12.457706 | orchestrator | Thursday 04 September 2025 00:45:32 +0000 (0:00:01.392) 0:00:36.074 **** 2025-09-04 00:56:12.457714 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e3ae8186--8e2b--5882--a4eb--fc7766e33302-osd--block--e3ae8186--8e2b--5882--a4eb--fc7766e33302', 'dm-uuid-LVM-4vjUyByvbTXE2cfhT7hlQJZ9LjnIa5lcePmGsAiEDknKBRRuXTfUfJ1EIzE0QhFk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.457727 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7cbfcabc--5503--586a--bc65--b738adbafdd0-osd--block--7cbfcabc--5503--586a--bc65--b738adbafdd0', 'dm-uuid-LVM-eekyZqxUzJD44xvcGvdxubJyRd3iPZ8aI462i0beb8i872Gol5wCt22vfCvj5oW0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.457736 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.457744 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.457758 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.457770 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--23145802--20b8--57b1--85d3--97c3024bf9a4-osd--block--23145802--20b8--57b1--85d3--97c3024bf9a4', 'dm-uuid-LVM-M4Gv3tPZtWstc4V6EMcB6GcGcxkJLygbCgznPhTWcNnLUwfcj96SpOUmAxnQUtyl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.457779 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f7bb41d6--3a07--594b--a5ed--81bcd3eca58b-osd--block--f7bb41d6--3a07--594b--a5ed--81bcd3eca58b', 'dm-uuid-LVM-BF4TTHSAu0Ubz13YxHfu7w3gnjdTioaP9Ve11VKbcW0EjtMkZ80KCjjDPwxmxizG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.457787 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.457795 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.457804 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.457817 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.457830 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.457890 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.457908 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.457919 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4210a630--ee04--544c--a974--f1a5ed1af07b-osd--block--4210a630--ee04--544c--a974--f1a5ed1af07b', 'dm-uuid-LVM-RgfZvjfqrEZYQe7xidzbnkMdq2kk7SfUzDVMMUqdThVgyrunAKGu0TVv8LGAXis6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.457928 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.457942 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.457950 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.457970 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472', 'scsi-SQEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472-part1', 'scsi-SQEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472-part14', 'scsi-SQEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472-part15', 'scsi-SQEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472-part16', 'scsi-SQEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
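Every item in this task is skipped rather than acted on, and the skips report two different false_condition values: on testbed-node-3/4/5 it is 'osd_auto_discovery | default(False) | bool', while on testbed-node-0/1 it is 'inventory_hostname in groups.get(osd_group_name, [])', so the testbed evidently pins its OSD devices explicitly instead of auto-discovering them. The following is a minimal Python sketch that replays that gate; the group name and its membership are assumptions for illustration, and only the condition strings and the per-node skip reasons come from the log.

# Minimal sketch, not the ceph-facts role itself: replays the two 'when'
# conditions reported as false_condition in the skips above. The group name
# and its membership are assumed; only the condition strings and the
# per-node skip reasons are taken from the log.
osd_auto_discovery = False                       # default(False): every OSD node skips here
osd_group_name = "ceph-osd"                      # assumed name, not printed in the log
groups = {"ceph-osd": ["testbed-node-3", "testbed-node-4", "testbed-node-5"]}  # assumed membership

def task_would_run(inventory_hostname: str) -> bool:
    """True only if both conditions from the task's 'when' clause hold."""
    if inventory_hostname not in groups.get(osd_group_name, []):
        return False                             # testbed-node-0/1 report this condition as false
    if not osd_auto_discovery:
        return False                             # testbed-node-3/4/5 report this condition as false
    return True

for host in ("testbed-node-0", "testbed-node-3"):
    print(host, task_would_run(host))            # both print False, so the whole loop is skipped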
2025-09-04 00:56:12.457986 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f811dc1d--ef4c--5806--9269--19449158751d-osd--block--f811dc1d--ef4c--5806--9269--19449158751d', 'dm-uuid-LVM-4QPi7SlwXSqadW2vH7XkyFokqG8HVkr7YqDSWgMIcVKOvznkX8XTXJglZn1wCQPL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.457995 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458008 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--e3ae8186--8e2b--5882--a4eb--fc7766e33302-osd--block--e3ae8186--8e2b--5882--a4eb--fc7766e33302'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5AfiQI-1laC-THaR-1i1Z-gMhy-NdET-RnIBVH', 'scsi-0QEMU_QEMU_HARDDISK_7a7dbb75-788e-43b1-b83f-869b645534fa', 'scsi-SQEMU_QEMU_HARDDISK_7a7dbb75-788e-43b1-b83f-869b645534fa'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458050 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458061 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458087 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--7cbfcabc--5503--586a--bc65--b738adbafdd0-osd--block--7cbfcabc--5503--586a--bc65--b738adbafdd0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ferPyo-yAsd-4A40-ipP1-yum7-Q4gw-IDZAP0', 'scsi-0QEMU_QEMU_HARDDISK_9f385429-6afa-4c66-a502-4318c5de8bbc', 'scsi-SQEMU_QEMU_HARDDISK_9f385429-6afa-4c66-a502-4318c5de8bbc'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458101 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458109 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_061ec2a1-a81e-446f-a16e-21a91d07e637', 'scsi-SQEMU_QEMU_HARDDISK_061ec2a1-a81e-446f-a16e-21a91d07e637'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458149 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458157 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-04-00-02-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458169 | orchestrator | skipping: [testbed-node-5] 
=> (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458190 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d', 'scsi-SQEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d-part1', 'scsi-SQEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d-part14', 'scsi-SQEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d-part15', 'scsi-SQEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d-part16', 'scsi-SQEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458205 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 
00:56:12.458219 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458239 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458261 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--23145802--20b8--57b1--85d3--97c3024bf9a4-osd--block--23145802--20b8--57b1--85d3--97c3024bf9a4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2SSp5E-3yAW-Tbzh-mbWS-dokf-F4dZ-pZhPAG', 'scsi-0QEMU_QEMU_HARDDISK_29dd0003-fb59-4bf4-b86e-44c71c2fb42f', 'scsi-SQEMU_QEMU_HARDDISK_29dd0003-fb59-4bf4-b86e-44c71c2fb42f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458272 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.458280 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458300 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458314 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458329 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458349 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--f7bb41d6--3a07--594b--a5ed--81bcd3eca58b-osd--block--f7bb41d6--3a07--594b--a5ed--81bcd3eca58b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CdcaIX-bgqq-oQhe-4pd6-ASKm-1LJD-QCGPEZ', 'scsi-0QEMU_QEMU_HARDDISK_1db7e25a-216c-47db-963d-7c5f28c46cb6', 'scsi-SQEMU_QEMU_HARDDISK_1db7e25a-216c-47db-963d-7c5f28c46cb6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458366 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458374 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458388 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458397 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458406 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': 
None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458417 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458433 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458441 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458450 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458469 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f348311-5206-43c3-98ff-251796244150', 'scsi-SQEMU_QEMU_HARDDISK_2f348311-5206-43c3-98ff-251796244150'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f348311-5206-43c3-98ff-251796244150-part1', 'scsi-SQEMU_QEMU_HARDDISK_2f348311-5206-43c3-98ff-251796244150-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f348311-5206-43c3-98ff-251796244150-part14', 'scsi-SQEMU_QEMU_HARDDISK_2f348311-5206-43c3-98ff-251796244150-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f348311-5206-43c3-98ff-251796244150-part15', 'scsi-SQEMU_QEMU_HARDDISK_2f348311-5206-43c3-98ff-251796244150-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f348311-5206-43c3-98ff-251796244150-part16', 'scsi-SQEMU_QEMU_HARDDISK_2f348311-5206-43c3-98ff-251796244150-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458484 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-04-00-02-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458492 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458500 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.458509 | orchestrator | 
skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458522 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10337e6d-8da5-41a0-908c-89b7c0023a77', 'scsi-SQEMU_QEMU_HARDDISK_10337e6d-8da5-41a0-908c-89b7c0023a77'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458531 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458546 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-04-00-02-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458555 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458563 | 
orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.458578 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb', 'scsi-SQEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb-part1', 'scsi-SQEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb-part14', 'scsi-SQEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb-part15', 'scsi-SQEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb-part16', 'scsi-SQEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458587 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458604 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458618 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0f9fbc0-3dcb-459a-ad8b-9f15e0856598', 'scsi-SQEMU_QEMU_HARDDISK_e0f9fbc0-3dcb-459a-ad8b-9f15e0856598'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0f9fbc0-3dcb-459a-ad8b-9f15e0856598-part1', 'scsi-SQEMU_QEMU_HARDDISK_e0f9fbc0-3dcb-459a-ad8b-9f15e0856598-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0f9fbc0-3dcb-459a-ad8b-9f15e0856598-part14', 'scsi-SQEMU_QEMU_HARDDISK_e0f9fbc0-3dcb-459a-ad8b-9f15e0856598-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0f9fbc0-3dcb-459a-ad8b-9f15e0856598-part15', 'scsi-SQEMU_QEMU_HARDDISK_e0f9fbc0-3dcb-459a-ad8b-9f15e0856598-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0f9fbc0-3dcb-459a-ad8b-9f15e0856598-part16', 'scsi-SQEMU_QEMU_HARDDISK_e0f9fbc0-3dcb-459a-ad8b-9f15e0856598-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458628 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--4210a630--ee04--544c--a974--f1a5ed1af07b-osd--block--4210a630--ee04--544c--a974--f1a5ed1af07b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CLO88k-jG6Z-1XWr-vKrj-ZYf3-iQib-Q9fjpq', 'scsi-0QEMU_QEMU_HARDDISK_d9ebab6f-eb3e-4afd-8760-ef197f6a89e1', 'scsi-SQEMU_QEMU_HARDDISK_d9ebab6f-eb3e-4afd-8760-ef197f6a89e1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458649 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-04-00-02-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458665 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458703 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.458713 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458721 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458736 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--f811dc1d--ef4c--5806--9269--19449158751d-osd--block--f811dc1d--ef4c--5806--9269--19449158751d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-mdT39u-qveb-3l9j-3x7I-kS0Q-PhUc-MrZPOh', 'scsi-0QEMU_QEMU_HARDDISK_bcfba7c6-2fb5-4d57-9e19-86dc2ee83b3a', 'scsi-SQEMU_QEMU_HARDDISK_bcfba7c6-2fb5-4d57-9e19-86dc2ee83b3a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458750 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a760aba3-b640-428b-b8ab-7cd9fc53fd0e', 'scsi-SQEMU_QEMU_HARDDISK_a760aba3-b640-428b-b8ab-7cd9fc53fd0e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458763 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-04-00-02-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458771 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.458779 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458788 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': 
[]}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458802 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458810 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458828 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e94f9961-caee-4d19-af27-42e09fc65b04', 'scsi-SQEMU_QEMU_HARDDISK_e94f9961-caee-4d19-af27-42e09fc65b04'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e94f9961-caee-4d19-af27-42e09fc65b04-part1', 'scsi-SQEMU_QEMU_HARDDISK_e94f9961-caee-4d19-af27-42e09fc65b04-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e94f9961-caee-4d19-af27-42e09fc65b04-part14', 'scsi-SQEMU_QEMU_HARDDISK_e94f9961-caee-4d19-af27-42e09fc65b04-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e94f9961-caee-4d19-af27-42e09fc65b04-part15', 'scsi-SQEMU_QEMU_HARDDISK_e94f9961-caee-4d19-af27-42e09fc65b04-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e94f9961-caee-4d19-af27-42e09fc65b04-part16', 'scsi-SQEMU_QEMU_HARDDISK_e94f9961-caee-4d19-af27-42e09fc65b04-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458838 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-04-00-02-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:56:12.458846 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.458854 | orchestrator | 2025-09-04 00:56:12.458862 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-09-04 00:56:12.458871 | orchestrator | Thursday 04 September 2025 00:45:35 +0000 (0:00:02.553) 0:00:38.628 **** 2025-09-04 00:56:12.458883 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.458891 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.458899 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.458906 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.458914 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.458926 | orchestrator | ok: [testbed-node-2] 2025-09-04 
00:56:12.458934 | orchestrator | 2025-09-04 00:56:12.458942 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-09-04 00:56:12.458949 | orchestrator | Thursday 04 September 2025 00:45:37 +0000 (0:00:02.206) 0:00:40.834 **** 2025-09-04 00:56:12.458957 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.458965 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.458972 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.458980 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.458987 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.458995 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.459003 | orchestrator | 2025-09-04 00:56:12.459010 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-04 00:56:12.459018 | orchestrator | Thursday 04 September 2025 00:45:38 +0000 (0:00:01.612) 0:00:42.447 **** 2025-09-04 00:56:12.459026 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.459034 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.459042 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.459049 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.459057 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.459064 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.459097 | orchestrator | 2025-09-04 00:56:12.459105 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-04 00:56:12.459120 | orchestrator | Thursday 04 September 2025 00:45:40 +0000 (0:00:02.096) 0:00:44.544 **** 2025-09-04 00:56:12.459128 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.459136 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.459144 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.459152 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.459159 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.459167 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.459175 | orchestrator | 2025-09-04 00:56:12.459182 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-04 00:56:12.459190 | orchestrator | Thursday 04 September 2025 00:45:42 +0000 (0:00:01.078) 0:00:45.622 **** 2025-09-04 00:56:12.459198 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.459206 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.459214 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.459221 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.459229 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.459237 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.459244 | orchestrator | 2025-09-04 00:56:12.459252 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-04 00:56:12.459264 | orchestrator | Thursday 04 September 2025 00:45:43 +0000 (0:00:01.912) 0:00:47.535 **** 2025-09-04 00:56:12.459272 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.459280 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.459287 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.459295 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.459303 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.459310 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.459318 | orchestrator 
| 2025-09-04 00:56:12.459326 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-09-04 00:56:12.459334 | orchestrator | Thursday 04 September 2025 00:45:45 +0000 (0:00:01.490) 0:00:49.025 **** 2025-09-04 00:56:12.459342 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-09-04 00:56:12.459350 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-09-04 00:56:12.459358 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-09-04 00:56:12.459366 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-09-04 00:56:12.459374 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-09-04 00:56:12.459381 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-09-04 00:56:12.459389 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-04 00:56:12.459404 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-09-04 00:56:12.459412 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-09-04 00:56:12.459420 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-09-04 00:56:12.459427 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-09-04 00:56:12.459435 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-09-04 00:56:12.459443 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-09-04 00:56:12.459451 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-04 00:56:12.459459 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-04 00:56:12.459467 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-09-04 00:56:12.459474 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-09-04 00:56:12.459482 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-09-04 00:56:12.459490 | orchestrator | 2025-09-04 00:56:12.459498 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-09-04 00:56:12.459506 | orchestrator | Thursday 04 September 2025 00:45:49 +0000 (0:00:04.475) 0:00:53.501 **** 2025-09-04 00:56:12.459514 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-04 00:56:12.459522 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-04 00:56:12.459530 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-04 00:56:12.459537 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.459545 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-04 00:56:12.459553 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-04 00:56:12.459561 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-04 00:56:12.459568 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-04 00:56:12.459576 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-04 00:56:12.459584 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-04 00:56:12.459596 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.459604 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-04 00:56:12.459612 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-04 00:56:12.459620 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-04 00:56:12.459627 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.459635 
| orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-09-04 00:56:12.459643 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-09-04 00:56:12.459651 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-09-04 00:56:12.459658 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.459666 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.459674 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-09-04 00:56:12.459682 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-09-04 00:56:12.459689 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-09-04 00:56:12.459697 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.459705 | orchestrator | 2025-09-04 00:56:12.459713 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-09-04 00:56:12.459721 | orchestrator | Thursday 04 September 2025 00:45:50 +0000 (0:00:00.724) 0:00:54.226 **** 2025-09-04 00:56:12.459728 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.459736 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.459744 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.459752 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:56:12.459760 | orchestrator | 2025-09-04 00:56:12.459768 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-04 00:56:12.459780 | orchestrator | Thursday 04 September 2025 00:45:51 +0000 (0:00:01.207) 0:00:55.433 **** 2025-09-04 00:56:12.459788 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.459796 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.459804 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.459811 | orchestrator | 2025-09-04 00:56:12.459819 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-09-04 00:56:12.459827 | orchestrator | Thursday 04 September 2025 00:45:52 +0000 (0:00:00.382) 0:00:55.816 **** 2025-09-04 00:56:12.459835 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.459842 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.459850 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.459858 | orchestrator | 2025-09-04 00:56:12.459865 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-09-04 00:56:12.459876 | orchestrator | Thursday 04 September 2025 00:45:52 +0000 (0:00:00.353) 0:00:56.169 **** 2025-09-04 00:56:12.459884 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.459892 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.459900 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.459908 | orchestrator | 2025-09-04 00:56:12.459915 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-04 00:56:12.459923 | orchestrator | Thursday 04 September 2025 00:45:53 +0000 (0:00:00.655) 0:00:56.825 **** 2025-09-04 00:56:12.459931 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.459939 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.459946 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.459954 | orchestrator | 2025-09-04 00:56:12.459962 | orchestrator | TASK 
[ceph-facts : Set_fact _interface] **************************************** 2025-09-04 00:56:12.459970 | orchestrator | Thursday 04 September 2025 00:45:53 +0000 (0:00:00.699) 0:00:57.524 **** 2025-09-04 00:56:12.459978 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-04 00:56:12.459986 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-04 00:56:12.459994 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-04 00:56:12.460001 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.460009 | orchestrator | 2025-09-04 00:56:12.460017 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-04 00:56:12.460025 | orchestrator | Thursday 04 September 2025 00:45:54 +0000 (0:00:00.398) 0:00:57.923 **** 2025-09-04 00:56:12.460032 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-04 00:56:12.460040 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-04 00:56:12.460048 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-04 00:56:12.460055 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.460063 | orchestrator | 2025-09-04 00:56:12.460108 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-04 00:56:12.460116 | orchestrator | Thursday 04 September 2025 00:45:54 +0000 (0:00:00.544) 0:00:58.468 **** 2025-09-04 00:56:12.460124 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-04 00:56:12.460132 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-04 00:56:12.460140 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-04 00:56:12.460148 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.460156 | orchestrator | 2025-09-04 00:56:12.460163 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-04 00:56:12.460171 | orchestrator | Thursday 04 September 2025 00:45:55 +0000 (0:00:00.410) 0:00:58.878 **** 2025-09-04 00:56:12.460179 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.460187 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.460195 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.460203 | orchestrator | 2025-09-04 00:56:12.460211 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-04 00:56:12.460218 | orchestrator | Thursday 04 September 2025 00:45:55 +0000 (0:00:00.400) 0:00:59.278 **** 2025-09-04 00:56:12.460231 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-04 00:56:12.460240 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-04 00:56:12.460247 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-04 00:56:12.460255 | orchestrator | 2025-09-04 00:56:12.460267 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-09-04 00:56:12.460276 | orchestrator | Thursday 04 September 2025 00:45:56 +0000 (0:00:00.997) 0:01:00.276 **** 2025-09-04 00:56:12.460283 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-04 00:56:12.460291 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-04 00:56:12.460299 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-04 00:56:12.460307 | orchestrator | ok: 
[testbed-node-3] => (item=testbed-node-3) 2025-09-04 00:56:12.460315 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-04 00:56:12.460322 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-04 00:56:12.460330 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-04 00:56:12.460338 | orchestrator | 2025-09-04 00:56:12.460346 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-09-04 00:56:12.460353 | orchestrator | Thursday 04 September 2025 00:45:57 +0000 (0:00:00.718) 0:01:00.995 **** 2025-09-04 00:56:12.460361 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-04 00:56:12.460369 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-04 00:56:12.460377 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-04 00:56:12.460384 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-04 00:56:12.460392 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-04 00:56:12.460400 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-04 00:56:12.460408 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-04 00:56:12.460415 | orchestrator | 2025-09-04 00:56:12.460423 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-04 00:56:12.460431 | orchestrator | Thursday 04 September 2025 00:45:59 +0000 (0:00:02.148) 0:01:03.143 **** 2025-09-04 00:56:12.460439 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:56:12.460447 | orchestrator | 2025-09-04 00:56:12.460458 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-04 00:56:12.460467 | orchestrator | Thursday 04 September 2025 00:46:00 +0000 (0:00:01.167) 0:01:04.310 **** 2025-09-04 00:56:12.460474 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:56:12.460482 | orchestrator | 2025-09-04 00:56:12.460490 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-04 00:56:12.460498 | orchestrator | Thursday 04 September 2025 00:46:01 +0000 (0:00:01.066) 0:01:05.377 **** 2025-09-04 00:56:12.460505 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.460513 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.460521 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.460529 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.460537 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.460544 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.460552 | orchestrator | 2025-09-04 00:56:12.460560 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-04 00:56:12.460567 | orchestrator | Thursday 04 September 2025 00:46:03 +0000 (0:00:01.376) 0:01:06.753 **** 2025-09-04 00:56:12.460579 | orchestrator | 
skipping: [testbed-node-0] 2025-09-04 00:56:12.460587 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.460595 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.460603 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.460610 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.460618 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.460626 | orchestrator | 2025-09-04 00:56:12.460633 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-04 00:56:12.460640 | orchestrator | Thursday 04 September 2025 00:46:04 +0000 (0:00:01.030) 0:01:07.783 **** 2025-09-04 00:56:12.460646 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.460653 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.460659 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.460666 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.460672 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.460679 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.460685 | orchestrator | 2025-09-04 00:56:12.460692 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-04 00:56:12.460698 | orchestrator | Thursday 04 September 2025 00:46:05 +0000 (0:00:01.536) 0:01:09.320 **** 2025-09-04 00:56:12.460705 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.460712 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.460718 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.460725 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.460731 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.460738 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.460744 | orchestrator | 2025-09-04 00:56:12.460751 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-04 00:56:12.460757 | orchestrator | Thursday 04 September 2025 00:46:06 +0000 (0:00:00.983) 0:01:10.304 **** 2025-09-04 00:56:12.460764 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.460770 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.460777 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.460784 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.460790 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.460797 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.460803 | orchestrator | 2025-09-04 00:56:12.460810 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-04 00:56:12.460820 | orchestrator | Thursday 04 September 2025 00:46:08 +0000 (0:00:01.596) 0:01:11.901 **** 2025-09-04 00:56:12.460827 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.460834 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.460840 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.460847 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.460853 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.460860 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.460866 | orchestrator | 2025-09-04 00:56:12.460873 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-04 00:56:12.460880 | orchestrator | Thursday 04 September 2025 00:46:09 +0000 (0:00:00.708) 0:01:12.609 **** 2025-09-04 00:56:12.460886 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.460893 | 
orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.460899 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.460906 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.460912 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.460919 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.460925 | orchestrator | 2025-09-04 00:56:12.460932 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-04 00:56:12.460938 | orchestrator | Thursday 04 September 2025 00:46:09 +0000 (0:00:00.751) 0:01:13.361 **** 2025-09-04 00:56:12.460945 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.460951 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.460958 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.460968 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.460975 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.460981 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.460987 | orchestrator | 2025-09-04 00:56:12.460994 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-04 00:56:12.461001 | orchestrator | Thursday 04 September 2025 00:46:11 +0000 (0:00:01.831) 0:01:15.193 **** 2025-09-04 00:56:12.461007 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.461014 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.461020 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.461027 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.461033 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.461040 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.461046 | orchestrator | 2025-09-04 00:56:12.461053 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-04 00:56:12.461059 | orchestrator | Thursday 04 September 2025 00:46:12 +0000 (0:00:01.182) 0:01:16.376 **** 2025-09-04 00:56:12.461075 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.461083 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.461089 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.461096 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.461102 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.461109 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.461115 | orchestrator | 2025-09-04 00:56:12.461125 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-04 00:56:12.461132 | orchestrator | Thursday 04 September 2025 00:46:14 +0000 (0:00:01.262) 0:01:17.638 **** 2025-09-04 00:56:12.461139 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.461145 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.461152 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.461158 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.461165 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.461172 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.461178 | orchestrator | 2025-09-04 00:56:12.461185 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-04 00:56:12.461192 | orchestrator | Thursday 04 September 2025 00:46:14 +0000 (0:00:00.756) 0:01:18.395 **** 2025-09-04 00:56:12.461198 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.461205 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.461211 | orchestrator | ok: 
[testbed-node-5] 2025-09-04 00:56:12.461218 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.461224 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.461231 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.461237 | orchestrator | 2025-09-04 00:56:12.461244 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-04 00:56:12.461250 | orchestrator | Thursday 04 September 2025 00:46:15 +0000 (0:00:00.870) 0:01:19.265 **** 2025-09-04 00:56:12.461257 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.461268 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.461280 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.461293 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.461304 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.461312 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.461318 | orchestrator | 2025-09-04 00:56:12.461325 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-04 00:56:12.461332 | orchestrator | Thursday 04 September 2025 00:46:16 +0000 (0:00:00.698) 0:01:19.963 **** 2025-09-04 00:56:12.461338 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.461344 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.461351 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.461357 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.461364 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.461370 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.461377 | orchestrator | 2025-09-04 00:56:12.461383 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-04 00:56:12.461394 | orchestrator | Thursday 04 September 2025 00:46:17 +0000 (0:00:00.970) 0:01:20.934 **** 2025-09-04 00:56:12.461400 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.461407 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.461413 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.461420 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.461426 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.461433 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.461439 | orchestrator | 2025-09-04 00:56:12.461445 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-04 00:56:12.461452 | orchestrator | Thursday 04 September 2025 00:46:18 +0000 (0:00:00.678) 0:01:21.613 **** 2025-09-04 00:56:12.461459 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.461465 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.461472 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.461478 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.461484 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.461491 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.461497 | orchestrator | 2025-09-04 00:56:12.461508 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-04 00:56:12.461515 | orchestrator | Thursday 04 September 2025 00:46:18 +0000 (0:00:00.905) 0:01:22.518 **** 2025-09-04 00:56:12.461521 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.461528 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.461534 | orchestrator | skipping: [testbed-node-5] 2025-09-04 
00:56:12.461541 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.461548 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.461554 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.461561 | orchestrator | 2025-09-04 00:56:12.461567 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-04 00:56:12.461574 | orchestrator | Thursday 04 September 2025 00:46:19 +0000 (0:00:00.862) 0:01:23.381 **** 2025-09-04 00:56:12.461580 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.461587 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.461593 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.461600 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.461606 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.461613 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.461619 | orchestrator | 2025-09-04 00:56:12.461626 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-04 00:56:12.461633 | orchestrator | Thursday 04 September 2025 00:46:20 +0000 (0:00:00.842) 0:01:24.223 **** 2025-09-04 00:56:12.461639 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.461645 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.461652 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.461658 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.461665 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.461671 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.461678 | orchestrator | 2025-09-04 00:56:12.461684 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-09-04 00:56:12.461691 | orchestrator | Thursday 04 September 2025 00:46:21 +0000 (0:00:01.251) 0:01:25.474 **** 2025-09-04 00:56:12.461697 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:56:12.461704 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:56:12.461710 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:56:12.461717 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:56:12.461723 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:56:12.461730 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:56:12.461736 | orchestrator | 2025-09-04 00:56:12.461743 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-09-04 00:56:12.461750 | orchestrator | Thursday 04 September 2025 00:46:23 +0000 (0:00:01.452) 0:01:26.927 **** 2025-09-04 00:56:12.461756 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:56:12.461763 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:56:12.461773 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:56:12.461779 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:56:12.461786 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:56:12.461793 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:56:12.461799 | orchestrator | 2025-09-04 00:56:12.461809 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-09-04 00:56:12.461816 | orchestrator | Thursday 04 September 2025 00:46:25 +0000 (0:00:02.016) 0:01:28.943 **** 2025-09-04 00:56:12.461822 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:56:12.461829 | orchestrator | 2025-09-04 00:56:12.461836 | orchestrator | TASK 
[ceph-container-common : Stop lvmetad] ************************************ 2025-09-04 00:56:12.461842 | orchestrator | Thursday 04 September 2025 00:46:26 +0000 (0:00:01.171) 0:01:30.115 **** 2025-09-04 00:56:12.461849 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.461855 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.461862 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.461868 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.461875 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.461881 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.461888 | orchestrator | 2025-09-04 00:56:12.461894 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-09-04 00:56:12.461901 | orchestrator | Thursday 04 September 2025 00:46:27 +0000 (0:00:00.596) 0:01:30.712 **** 2025-09-04 00:56:12.461907 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.461914 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.461920 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.461927 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.461933 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.461940 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.461946 | orchestrator | 2025-09-04 00:56:12.461953 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-09-04 00:56:12.461960 | orchestrator | Thursday 04 September 2025 00:46:27 +0000 (0:00:00.820) 0:01:31.532 **** 2025-09-04 00:56:12.461966 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-04 00:56:12.461973 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-04 00:56:12.461980 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-04 00:56:12.461986 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-04 00:56:12.461993 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-04 00:56:12.461999 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-04 00:56:12.462006 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-04 00:56:12.462012 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-04 00:56:12.462182 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-04 00:56:12.462195 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-04 00:56:12.462207 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-04 00:56:12.462237 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-04 00:56:12.462244 | orchestrator | 2025-09-04 00:56:12.462250 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-09-04 00:56:12.462256 | orchestrator | Thursday 04 September 2025 00:46:29 +0000 (0:00:01.195) 0:01:32.727 **** 2025-09-04 00:56:12.462262 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:56:12.462269 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:56:12.462275 | orchestrator | changed: 
[testbed-node-5] 2025-09-04 00:56:12.462287 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:56:12.462293 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:56:12.462300 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:56:12.462306 | orchestrator | 2025-09-04 00:56:12.462312 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-09-04 00:56:12.462318 | orchestrator | Thursday 04 September 2025 00:46:30 +0000 (0:00:01.152) 0:01:33.880 **** 2025-09-04 00:56:12.462324 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.462330 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.462336 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.462342 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.462348 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.462354 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.462360 | orchestrator | 2025-09-04 00:56:12.462366 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-09-04 00:56:12.462372 | orchestrator | Thursday 04 September 2025 00:46:31 +0000 (0:00:00.739) 0:01:34.619 **** 2025-09-04 00:56:12.462378 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.462385 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.462391 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.462397 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.462403 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.462409 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.462415 | orchestrator | 2025-09-04 00:56:12.462421 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-09-04 00:56:12.462427 | orchestrator | Thursday 04 September 2025 00:46:31 +0000 (0:00:00.803) 0:01:35.423 **** 2025-09-04 00:56:12.462433 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.462439 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.462445 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.462452 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.462458 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.462464 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.462470 | orchestrator | 2025-09-04 00:56:12.462476 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-09-04 00:56:12.462482 | orchestrator | Thursday 04 September 2025 00:46:32 +0000 (0:00:00.579) 0:01:36.002 **** 2025-09-04 00:56:12.462492 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:56:12.462499 | orchestrator | 2025-09-04 00:56:12.462505 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-09-04 00:56:12.462512 | orchestrator | Thursday 04 September 2025 00:46:33 +0000 (0:00:01.281) 0:01:37.283 **** 2025-09-04 00:56:12.462518 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.462524 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.462530 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.462536 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.462542 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.462548 | orchestrator | ok: [testbed-node-0] 2025-09-04 
00:56:12.462554 | orchestrator | 2025-09-04 00:56:12.462561 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-09-04 00:56:12.462567 | orchestrator | Thursday 04 September 2025 00:47:28 +0000 (0:00:54.767) 0:02:32.051 **** 2025-09-04 00:56:12.462573 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-04 00:56:12.462579 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-04 00:56:12.462585 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-04 00:56:12.462591 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.462597 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-04 00:56:12.462604 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-04 00:56:12.462613 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-04 00:56:12.462620 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.462626 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-04 00:56:12.462632 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-04 00:56:12.462638 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-04 00:56:12.462644 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.462650 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-04 00:56:12.462656 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-04 00:56:12.462663 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-04 00:56:12.462669 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.462675 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-04 00:56:12.462681 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-04 00:56:12.462687 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-04 00:56:12.462693 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.462700 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-04 00:56:12.462722 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2) 2025-09-04 00:56:12 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:56:12.462736 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-04 00:56:12.462742 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.462748 | orchestrator | 2025-09-04 00:56:12.462754 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-09-04 00:56:12.462760 | orchestrator | Thursday 04 September 2025 00:47:29 +0000 (0:00:00.629) 0:02:32.680 **** 2025-09-04 00:56:12.462766 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.462772 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.462780 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.462787 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.462794 | orchestrator | skipping: [testbed-node-1]
2025-09-04 00:56:12.462801 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.462808 | orchestrator | 2025-09-04 00:56:12.462815 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-09-04 00:56:12.462822 | orchestrator | Thursday 04 September 2025 00:47:29 +0000 (0:00:00.790) 0:02:33.471 **** 2025-09-04 00:56:12.462829 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.462837 | orchestrator | 2025-09-04 00:56:12.462843 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-09-04 00:56:12.462851 | orchestrator | Thursday 04 September 2025 00:47:30 +0000 (0:00:00.134) 0:02:33.605 **** 2025-09-04 00:56:12.462858 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.462865 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.462872 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.462879 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.462886 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.462893 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.462901 | orchestrator | 2025-09-04 00:56:12.462908 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-09-04 00:56:12.462915 | orchestrator | Thursday 04 September 2025 00:47:30 +0000 (0:00:00.701) 0:02:34.307 **** 2025-09-04 00:56:12.462922 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.462929 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.462936 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.462943 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.462953 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.462960 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.462968 | orchestrator | 2025-09-04 00:56:12.462975 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-09-04 00:56:12.462982 | orchestrator | Thursday 04 September 2025 00:47:31 +0000 (0:00:00.968) 0:02:35.275 **** 2025-09-04 00:56:12.462989 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.462996 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.463010 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.463017 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.463024 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.463031 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.463038 | orchestrator | 2025-09-04 00:56:12.463046 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-09-04 00:56:12.463053 | orchestrator | Thursday 04 September 2025 00:47:32 +0000 (0:00:00.710) 0:02:35.986 **** 2025-09-04 00:56:12.463060 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.463080 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.463087 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.463095 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.463102 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.463109 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.463116 | orchestrator | 2025-09-04 00:56:12.463123 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-09-04 00:56:12.463130 | orchestrator | Thursday 04 September 2025 00:47:34 +0000 (0:00:02.402) 0:02:38.388 **** 2025-09-04 00:56:12.463137 
| orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.463143 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.463149 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.463155 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.463161 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.463167 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.463173 | orchestrator | 2025-09-04 00:56:12.463179 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-09-04 00:56:12.463185 | orchestrator | Thursday 04 September 2025 00:47:35 +0000 (0:00:00.557) 0:02:38.946 **** 2025-09-04 00:56:12.463192 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:56:12.463198 | orchestrator | 2025-09-04 00:56:12.463204 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-09-04 00:56:12.463210 | orchestrator | Thursday 04 September 2025 00:47:36 +0000 (0:00:01.037) 0:02:39.984 **** 2025-09-04 00:56:12.463216 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.463222 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.463228 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.463235 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.463241 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.463247 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.463253 | orchestrator | 2025-09-04 00:56:12.463259 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-09-04 00:56:12.463265 | orchestrator | Thursday 04 September 2025 00:47:37 +0000 (0:00:00.700) 0:02:40.684 **** 2025-09-04 00:56:12.463271 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.463277 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.463283 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.463290 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.463296 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.463302 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.463308 | orchestrator | 2025-09-04 00:56:12.463314 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-09-04 00:56:12.463320 | orchestrator | Thursday 04 September 2025 00:47:37 +0000 (0:00:00.562) 0:02:41.247 **** 2025-09-04 00:56:12.463327 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.463337 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.463343 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.463365 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.463372 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.463378 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.463384 | orchestrator | 2025-09-04 00:56:12.463391 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-09-04 00:56:12.463397 | orchestrator | Thursday 04 September 2025 00:47:38 +0000 (0:00:00.644) 0:02:41.891 **** 2025-09-04 00:56:12.463403 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.463409 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.463415 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.463421 | orchestrator | skipping: 
[testbed-node-0] 2025-09-04 00:56:12.463427 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.463433 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.463439 | orchestrator | 2025-09-04 00:56:12.463446 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-09-04 00:56:12.463452 | orchestrator | Thursday 04 September 2025 00:47:38 +0000 (0:00:00.556) 0:02:42.448 **** 2025-09-04 00:56:12.463458 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.463464 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.463470 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.463476 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.463482 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.463488 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.463494 | orchestrator | 2025-09-04 00:56:12.463501 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-09-04 00:56:12.463507 | orchestrator | Thursday 04 September 2025 00:47:39 +0000 (0:00:00.561) 0:02:43.009 **** 2025-09-04 00:56:12.463513 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.463519 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.463525 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.463531 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.463537 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.463543 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.463549 | orchestrator | 2025-09-04 00:56:12.463555 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-09-04 00:56:12.463561 | orchestrator | Thursday 04 September 2025 00:47:40 +0000 (0:00:00.796) 0:02:43.806 **** 2025-09-04 00:56:12.463568 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.463574 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.463580 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.463586 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.463592 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.463598 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.463604 | orchestrator | 2025-09-04 00:56:12.463610 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-09-04 00:56:12.463616 | orchestrator | Thursday 04 September 2025 00:47:40 +0000 (0:00:00.536) 0:02:44.343 **** 2025-09-04 00:56:12.463625 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.463632 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.463638 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.463644 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.463650 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.463656 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.463662 | orchestrator | 2025-09-04 00:56:12.463668 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-09-04 00:56:12.463674 | orchestrator | Thursday 04 September 2025 00:47:41 +0000 (0:00:00.754) 0:02:45.097 **** 2025-09-04 00:56:12.463680 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.463686 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.463692 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.463698 | orchestrator | ok: [testbed-node-0] 
2025-09-04 00:56:12.463708 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.463714 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.463720 | orchestrator | 2025-09-04 00:56:12.463727 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-09-04 00:56:12.463733 | orchestrator | Thursday 04 September 2025 00:47:42 +0000 (0:00:01.074) 0:02:46.171 **** 2025-09-04 00:56:12.463739 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:56:12.463745 | orchestrator | 2025-09-04 00:56:12.463751 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-09-04 00:56:12.463757 | orchestrator | Thursday 04 September 2025 00:47:43 +0000 (0:00:01.230) 0:02:47.402 **** 2025-09-04 00:56:12.463763 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-09-04 00:56:12.463769 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-09-04 00:56:12.463776 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-09-04 00:56:12.463782 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-09-04 00:56:12.463788 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-09-04 00:56:12.463794 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-09-04 00:56:12.463800 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-09-04 00:56:12.463806 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-09-04 00:56:12.463812 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-09-04 00:56:12.463818 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-09-04 00:56:12.463824 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-09-04 00:56:12.463830 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-09-04 00:56:12.463837 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-09-04 00:56:12.463843 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-09-04 00:56:12.463849 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-09-04 00:56:12.463855 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-09-04 00:56:12.463861 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-09-04 00:56:12.463867 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-09-04 00:56:12.463888 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-09-04 00:56:12.463895 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-09-04 00:56:12.463901 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-09-04 00:56:12.463907 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-09-04 00:56:12.463913 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-09-04 00:56:12.463919 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-09-04 00:56:12.463925 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-09-04 00:56:12.463932 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-09-04 00:56:12.463938 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-09-04 00:56:12.463944 | orchestrator | changed: 
[testbed-node-0] => (item=/var/lib/ceph/mds) 2025-09-04 00:56:12.463950 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-09-04 00:56:12.463956 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-09-04 00:56:12.463962 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-09-04 00:56:12.463968 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-09-04 00:56:12.463974 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-09-04 00:56:12.463981 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-09-04 00:56:12.463987 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-09-04 00:56:12.463993 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-09-04 00:56:12.464005 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-09-04 00:56:12.464011 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-09-04 00:56:12.464017 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-09-04 00:56:12.464023 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-09-04 00:56:12.464029 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-09-04 00:56:12.464036 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-09-04 00:56:12.464042 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-09-04 00:56:12.464048 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-09-04 00:56:12.464054 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-04 00:56:12.464060 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-09-04 00:56:12.464075 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-09-04 00:56:12.464084 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-04 00:56:12.464091 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-04 00:56:12.464097 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-09-04 00:56:12.464103 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-04 00:56:12.464109 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-09-04 00:56:12.464115 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-04 00:56:12.464121 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-04 00:56:12.464127 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-04 00:56:12.464133 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-04 00:56:12.464139 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-04 00:56:12.464145 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-04 00:56:12.464151 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-04 00:56:12.464157 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-04 00:56:12.464163 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-04 00:56:12.464170 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 
2025-09-04 00:56:12.464176 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-04 00:56:12.464182 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-04 00:56:12.464188 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-04 00:56:12.464194 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-04 00:56:12.464200 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-04 00:56:12.464206 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-04 00:56:12.464212 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-04 00:56:12.464218 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-04 00:56:12.464224 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-04 00:56:12.464230 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-04 00:56:12.464236 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-04 00:56:12.464243 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-04 00:56:12.464249 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-04 00:56:12.464255 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-04 00:56:12.464280 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-04 00:56:12.464287 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-04 00:56:12.464293 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-04 00:56:12.464299 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-04 00:56:12.464305 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-09-04 00:56:12.464311 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-09-04 00:56:12.464318 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-04 00:56:12.464324 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-04 00:56:12.464330 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-04 00:56:12.464336 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-09-04 00:56:12.464342 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-09-04 00:56:12.464348 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-09-04 00:56:12.464354 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-04 00:56:12.464360 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-09-04 00:56:12.464366 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-09-04 00:56:12.464372 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-09-04 00:56:12.464378 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-09-04 00:56:12.464384 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-09-04 00:56:12.464390 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-09-04 00:56:12.464396 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 
2025-09-04 00:56:12.464402 | orchestrator | 2025-09-04 00:56:12.464409 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-09-04 00:56:12.464415 | orchestrator | Thursday 04 September 2025 00:47:50 +0000 (0:00:06.685) 0:02:54.088 **** 2025-09-04 00:56:12.464421 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.464427 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.464433 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.464439 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:56:12.464445 | orchestrator | 2025-09-04 00:56:12.464451 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-09-04 00:56:12.464460 | orchestrator | Thursday 04 September 2025 00:47:51 +0000 (0:00:01.387) 0:02:55.475 **** 2025-09-04 00:56:12.464466 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-04 00:56:12.464473 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-04 00:56:12.464479 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-04 00:56:12.464485 | orchestrator | 2025-09-04 00:56:12.464491 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-09-04 00:56:12.464497 | orchestrator | Thursday 04 September 2025 00:47:52 +0000 (0:00:00.914) 0:02:56.390 **** 2025-09-04 00:56:12.464503 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-04 00:56:12.464510 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-04 00:56:12.464516 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-04 00:56:12.464527 | orchestrator | 2025-09-04 00:56:12.464533 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-09-04 00:56:12.464539 | orchestrator | Thursday 04 September 2025 00:47:54 +0000 (0:00:01.648) 0:02:58.039 **** 2025-09-04 00:56:12.464545 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.464551 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.464557 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.464563 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.464569 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.464575 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.464581 | orchestrator | 2025-09-04 00:56:12.464587 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-09-04 00:56:12.464593 | orchestrator | Thursday 04 September 2025 00:47:54 +0000 (0:00:00.473) 0:02:58.513 **** 2025-09-04 00:56:12.464600 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.464606 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.464611 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.464618 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.464624 | 
orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.464630 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.464636 | orchestrator | 2025-09-04 00:56:12.464642 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-09-04 00:56:12.464648 | orchestrator | Thursday 04 September 2025 00:47:55 +0000 (0:00:00.712) 0:02:59.225 **** 2025-09-04 00:56:12.464654 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.464660 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.464666 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.464672 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.464678 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.464684 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.464690 | orchestrator | 2025-09-04 00:56:12.464710 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-09-04 00:56:12.464717 | orchestrator | Thursday 04 September 2025 00:47:56 +0000 (0:00:00.565) 0:02:59.791 **** 2025-09-04 00:56:12.464723 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.464729 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.464736 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.464742 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.464748 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.464754 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.464760 | orchestrator | 2025-09-04 00:56:12.464766 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-09-04 00:56:12.464772 | orchestrator | Thursday 04 September 2025 00:47:56 +0000 (0:00:00.586) 0:03:00.378 **** 2025-09-04 00:56:12.464778 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.464784 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.464790 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.464796 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.464802 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.464808 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.464814 | orchestrator | 2025-09-04 00:56:12.464821 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-09-04 00:56:12.464827 | orchestrator | Thursday 04 September 2025 00:47:57 +0000 (0:00:00.477) 0:03:00.855 **** 2025-09-04 00:56:12.464833 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.464839 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.464845 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.464851 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.464857 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.464863 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.464869 | orchestrator | 2025-09-04 00:56:12.464875 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-09-04 00:56:12.464886 | orchestrator | Thursday 04 September 2025 00:47:57 +0000 (0:00:00.658) 0:03:01.513 **** 2025-09-04 00:56:12.464892 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.464898 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.464904 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.464910 | orchestrator | 
skipping: [testbed-node-0] 2025-09-04 00:56:12.464916 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.464922 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.464928 | orchestrator | 2025-09-04 00:56:12.464934 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-09-04 00:56:12.464941 | orchestrator | Thursday 04 September 2025 00:47:59 +0000 (0:00:01.122) 0:03:02.636 **** 2025-09-04 00:56:12.464947 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.464953 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.464959 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.464968 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.464974 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.464980 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.464986 | orchestrator | 2025-09-04 00:56:12.464992 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-09-04 00:56:12.464999 | orchestrator | Thursday 04 September 2025 00:47:59 +0000 (0:00:00.764) 0:03:03.400 **** 2025-09-04 00:56:12.465005 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.465011 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.465017 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.465023 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.465029 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.465035 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.465041 | orchestrator | 2025-09-04 00:56:12.465048 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-09-04 00:56:12.465054 | orchestrator | Thursday 04 September 2025 00:48:03 +0000 (0:00:04.073) 0:03:07.474 **** 2025-09-04 00:56:12.465060 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.465094 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.465101 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.465108 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.465114 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.465120 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.465126 | orchestrator | 2025-09-04 00:56:12.465132 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-09-04 00:56:12.465138 | orchestrator | Thursday 04 September 2025 00:48:04 +0000 (0:00:00.642) 0:03:08.116 **** 2025-09-04 00:56:12.465144 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.465150 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.465156 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.465162 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.465168 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.465174 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.465180 | orchestrator | 2025-09-04 00:56:12.465186 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-09-04 00:56:12.465192 | orchestrator | Thursday 04 September 2025 00:48:05 +0000 (0:00:00.954) 0:03:09.071 **** 2025-09-04 00:56:12.465198 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.465203 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.465208 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.465213 | orchestrator | 
skipping: [testbed-node-0] 2025-09-04 00:56:12.465219 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.465224 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.465229 | orchestrator | 2025-09-04 00:56:12.465235 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2025-09-04 00:56:12.465240 | orchestrator | Thursday 04 September 2025 00:48:06 +0000 (0:00:00.649) 0:03:09.720 **** 2025-09-04 00:56:12.465245 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-04 00:56:12.465254 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-04 00:56:12.465260 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-04 00:56:12.465265 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.465286 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.465292 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.465297 | orchestrator | 2025-09-04 00:56:12.465303 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-09-04 00:56:12.465308 | orchestrator | Thursday 04 September 2025 00:48:07 +0000 (0:00:00.913) 0:03:10.634 **** 2025-09-04 00:56:12.465314 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-09-04 00:56:12.465321 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-09-04 00:56:12.465327 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.465333 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-09-04 00:56:12.465339 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-09-04 00:56:12.465344 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.465353 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-09-04 00:56:12.465359 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': 
'/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2025-09-04 00:56:12.465364 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.465370 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.465375 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.465380 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.465385 | orchestrator | 2025-09-04 00:56:12.465391 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-09-04 00:56:12.465396 | orchestrator | Thursday 04 September 2025 00:48:07 +0000 (0:00:00.746) 0:03:11.381 **** 2025-09-04 00:56:12.465401 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.465407 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.465412 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.465417 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.465422 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.465428 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.465433 | orchestrator | 2025-09-04 00:56:12.465438 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2025-09-04 00:56:12.465447 | orchestrator | Thursday 04 September 2025 00:48:09 +0000 (0:00:01.339) 0:03:12.720 **** 2025-09-04 00:56:12.465452 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.465457 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.465463 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.465468 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.465473 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.465479 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.465484 | orchestrator | 2025-09-04 00:56:12.465489 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-04 00:56:12.465495 | orchestrator | Thursday 04 September 2025 00:48:10 +0000 (0:00:01.103) 0:03:13.824 **** 2025-09-04 00:56:12.465500 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.465506 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.465511 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.465516 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.465521 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.465527 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.465532 | orchestrator | 2025-09-04 00:56:12.465537 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-09-04 00:56:12.465543 | orchestrator | Thursday 04 September 2025 00:48:11 +0000 (0:00:01.189) 0:03:15.013 **** 2025-09-04 00:56:12.465548 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.465553 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.465558 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.465564 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.465569 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.465574 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.465580 | orchestrator | 2025-09-04 00:56:12.465585 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 
2025-09-04 00:56:12.465603 | orchestrator | Thursday 04 September 2025 00:48:12 +0000 (0:00:00.713) 0:03:15.726 **** 2025-09-04 00:56:12.465609 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.465614 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.465620 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.465625 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.465630 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.465635 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.465641 | orchestrator | 2025-09-04 00:56:12.465646 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-04 00:56:12.465652 | orchestrator | Thursday 04 September 2025 00:48:13 +0000 (0:00:01.192) 0:03:16.919 **** 2025-09-04 00:56:12.465657 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.465662 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.465668 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.465673 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.465678 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.465684 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.465689 | orchestrator | 2025-09-04 00:56:12.465694 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-04 00:56:12.465700 | orchestrator | Thursday 04 September 2025 00:48:14 +0000 (0:00:00.857) 0:03:17.776 **** 2025-09-04 00:56:12.465705 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-04 00:56:12.465710 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-04 00:56:12.465716 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-04 00:56:12.465721 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.465726 | orchestrator | 2025-09-04 00:56:12.465731 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-04 00:56:12.465737 | orchestrator | Thursday 04 September 2025 00:48:14 +0000 (0:00:00.635) 0:03:18.412 **** 2025-09-04 00:56:12.465742 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-04 00:56:12.465751 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-04 00:56:12.465756 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-04 00:56:12.465762 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.465767 | orchestrator | 2025-09-04 00:56:12.465773 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-04 00:56:12.465778 | orchestrator | Thursday 04 September 2025 00:48:15 +0000 (0:00:00.592) 0:03:19.004 **** 2025-09-04 00:56:12.465783 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-04 00:56:12.465789 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-04 00:56:12.465794 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-04 00:56:12.465799 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.465805 | orchestrator | 2025-09-04 00:56:12.465814 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-04 00:56:12.465820 | orchestrator | Thursday 04 September 2025 00:48:16 +0000 (0:00:00.685) 0:03:19.689 **** 2025-09-04 00:56:12.465825 | orchestrator | ok: [testbed-node-3] 2025-09-04 
00:56:12.465831 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.465836 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.465841 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.465847 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.465852 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.465857 | orchestrator | 2025-09-04 00:56:12.465863 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-04 00:56:12.465868 | orchestrator | Thursday 04 September 2025 00:48:16 +0000 (0:00:00.579) 0:03:20.268 **** 2025-09-04 00:56:12.465873 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-04 00:56:12.465879 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-04 00:56:12.465884 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-09-04 00:56:12.465890 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.465895 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-09-04 00:56:12.465900 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.465906 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-04 00:56:12.465911 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-09-04 00:56:12.465916 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.465922 | orchestrator | 2025-09-04 00:56:12.465927 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-09-04 00:56:12.465932 | orchestrator | Thursday 04 September 2025 00:48:18 +0000 (0:00:01.929) 0:03:22.198 **** 2025-09-04 00:56:12.465938 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:56:12.465943 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:56:12.465948 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:56:12.465954 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:56:12.465959 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:56:12.465964 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:56:12.465970 | orchestrator | 2025-09-04 00:56:12.465975 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-04 00:56:12.465980 | orchestrator | Thursday 04 September 2025 00:48:22 +0000 (0:00:03.964) 0:03:26.163 **** 2025-09-04 00:56:12.465986 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:56:12.465991 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:56:12.465996 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:56:12.466002 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:56:12.466007 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:56:12.466012 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:56:12.466035 | orchestrator | 2025-09-04 00:56:12.466041 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-09-04 00:56:12.466046 | orchestrator | Thursday 04 September 2025 00:48:23 +0000 (0:00:01.337) 0:03:27.500 **** 2025-09-04 00:56:12.466051 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.466057 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.466074 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.466080 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:56:12.466086 | orchestrator | 2025-09-04 00:56:12.466091 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 
2025-09-04 00:56:12.466097 | orchestrator | Thursday 04 September 2025 00:48:25 +0000 (0:00:01.179) 0:03:28.680 **** 2025-09-04 00:56:12.466117 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.466123 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.466128 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.466133 | orchestrator | 2025-09-04 00:56:12.466139 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-09-04 00:56:12.466144 | orchestrator | Thursday 04 September 2025 00:48:25 +0000 (0:00:00.408) 0:03:29.088 **** 2025-09-04 00:56:12.466150 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:56:12.466155 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:56:12.466160 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:56:12.466166 | orchestrator | 2025-09-04 00:56:12.466171 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-09-04 00:56:12.466176 | orchestrator | Thursday 04 September 2025 00:48:26 +0000 (0:00:01.361) 0:03:30.449 **** 2025-09-04 00:56:12.466182 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-04 00:56:12.466187 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-04 00:56:12.466192 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-04 00:56:12.466198 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.466203 | orchestrator | 2025-09-04 00:56:12.466208 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-09-04 00:56:12.466214 | orchestrator | Thursday 04 September 2025 00:48:27 +0000 (0:00:00.868) 0:03:31.318 **** 2025-09-04 00:56:12.466219 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.466224 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.466230 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.466235 | orchestrator | 2025-09-04 00:56:12.466240 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-09-04 00:56:12.466245 | orchestrator | Thursday 04 September 2025 00:48:28 +0000 (0:00:00.589) 0:03:31.907 **** 2025-09-04 00:56:12.466251 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.466256 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.466261 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.466267 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:56:12.466272 | orchestrator | 2025-09-04 00:56:12.466277 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-09-04 00:56:12.466283 | orchestrator | Thursday 04 September 2025 00:48:29 +0000 (0:00:01.017) 0:03:32.924 **** 2025-09-04 00:56:12.466288 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-04 00:56:12.466293 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-04 00:56:12.466299 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-04 00:56:12.466304 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.466309 | orchestrator | 2025-09-04 00:56:12.466317 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-09-04 00:56:12.466323 | orchestrator | Thursday 04 September 2025 00:48:29 +0000 (0:00:00.400) 0:03:33.324 **** 2025-09-04 00:56:12.466328 | 
orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.466334 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.466339 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.466344 | orchestrator | 2025-09-04 00:56:12.466349 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-09-04 00:56:12.466355 | orchestrator | Thursday 04 September 2025 00:48:30 +0000 (0:00:00.702) 0:03:34.027 **** 2025-09-04 00:56:12.466360 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.466369 | orchestrator | 2025-09-04 00:56:12.466374 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-09-04 00:56:12.466379 | orchestrator | Thursday 04 September 2025 00:48:30 +0000 (0:00:00.235) 0:03:34.262 **** 2025-09-04 00:56:12.466385 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.466390 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.466395 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.466400 | orchestrator | 2025-09-04 00:56:12.466406 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-09-04 00:56:12.466411 | orchestrator | Thursday 04 September 2025 00:48:31 +0000 (0:00:00.429) 0:03:34.691 **** 2025-09-04 00:56:12.466416 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.466421 | orchestrator | 2025-09-04 00:56:12.466427 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-09-04 00:56:12.466432 | orchestrator | Thursday 04 September 2025 00:48:31 +0000 (0:00:00.214) 0:03:34.906 **** 2025-09-04 00:56:12.466437 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.466442 | orchestrator | 2025-09-04 00:56:12.466448 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-09-04 00:56:12.466453 | orchestrator | Thursday 04 September 2025 00:48:31 +0000 (0:00:00.208) 0:03:35.115 **** 2025-09-04 00:56:12.466458 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.466464 | orchestrator | 2025-09-04 00:56:12.466469 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-09-04 00:56:12.466474 | orchestrator | Thursday 04 September 2025 00:48:31 +0000 (0:00:00.132) 0:03:35.248 **** 2025-09-04 00:56:12.466479 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.466485 | orchestrator | 2025-09-04 00:56:12.466490 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-09-04 00:56:12.466495 | orchestrator | Thursday 04 September 2025 00:48:31 +0000 (0:00:00.226) 0:03:35.475 **** 2025-09-04 00:56:12.466501 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.466506 | orchestrator | 2025-09-04 00:56:12.466511 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-09-04 00:56:12.466516 | orchestrator | Thursday 04 September 2025 00:48:32 +0000 (0:00:00.213) 0:03:35.688 **** 2025-09-04 00:56:12.466522 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-04 00:56:12.466527 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-04 00:56:12.466532 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-04 00:56:12.466538 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.466543 | orchestrator | 2025-09-04 00:56:12.466548 | 
orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-09-04 00:56:12.466567 | orchestrator | Thursday 04 September 2025 00:48:32 +0000 (0:00:00.649) 0:03:36.338 **** 2025-09-04 00:56:12.466573 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.466579 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.466584 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.466589 | orchestrator | 2025-09-04 00:56:12.466595 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-09-04 00:56:12.466600 | orchestrator | Thursday 04 September 2025 00:48:33 +0000 (0:00:00.601) 0:03:36.939 **** 2025-09-04 00:56:12.466605 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.466611 | orchestrator | 2025-09-04 00:56:12.466616 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-09-04 00:56:12.466621 | orchestrator | Thursday 04 September 2025 00:48:33 +0000 (0:00:00.239) 0:03:37.178 **** 2025-09-04 00:56:12.466627 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.466632 | orchestrator | 2025-09-04 00:56:12.466637 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-09-04 00:56:12.466642 | orchestrator | Thursday 04 September 2025 00:48:33 +0000 (0:00:00.217) 0:03:37.396 **** 2025-09-04 00:56:12.466648 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.466653 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.466661 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.466667 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:56:12.466672 | orchestrator | 2025-09-04 00:56:12.466678 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-09-04 00:56:12.466683 | orchestrator | Thursday 04 September 2025 00:48:34 +0000 (0:00:01.153) 0:03:38.550 **** 2025-09-04 00:56:12.466688 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.466693 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.466699 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.466704 | orchestrator | 2025-09-04 00:56:12.466709 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-09-04 00:56:12.466715 | orchestrator | Thursday 04 September 2025 00:48:35 +0000 (0:00:00.486) 0:03:39.037 **** 2025-09-04 00:56:12.466720 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:56:12.466725 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:56:12.466731 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:56:12.466736 | orchestrator | 2025-09-04 00:56:12.466741 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-09-04 00:56:12.466747 | orchestrator | Thursday 04 September 2025 00:48:37 +0000 (0:00:01.656) 0:03:40.693 **** 2025-09-04 00:56:12.466752 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-04 00:56:12.466758 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-04 00:56:12.466763 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-04 00:56:12.466771 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.466776 | orchestrator | 2025-09-04 00:56:12.466781 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called after restart] ********* 2025-09-04 00:56:12.466787 | orchestrator | Thursday 04 September 2025 00:48:38 +0000 (0:00:00.904) 0:03:41.598 **** 2025-09-04 00:56:12.466792 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.466797 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.466803 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.466808 | orchestrator | 2025-09-04 00:56:12.466813 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-09-04 00:56:12.466819 | orchestrator | Thursday 04 September 2025 00:48:38 +0000 (0:00:00.425) 0:03:42.024 **** 2025-09-04 00:56:12.466824 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.466829 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.466835 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.466840 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:56:12.466846 | orchestrator | 2025-09-04 00:56:12.466851 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-09-04 00:56:12.466856 | orchestrator | Thursday 04 September 2025 00:48:39 +0000 (0:00:01.113) 0:03:43.137 **** 2025-09-04 00:56:12.466862 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.466867 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.466872 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.466877 | orchestrator | 2025-09-04 00:56:12.466883 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-09-04 00:56:12.466888 | orchestrator | Thursday 04 September 2025 00:48:40 +0000 (0:00:00.833) 0:03:43.971 **** 2025-09-04 00:56:12.466893 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:56:12.466899 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:56:12.466904 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:56:12.466909 | orchestrator | 2025-09-04 00:56:12.466915 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-09-04 00:56:12.466920 | orchestrator | Thursday 04 September 2025 00:48:42 +0000 (0:00:02.162) 0:03:46.134 **** 2025-09-04 00:56:12.466926 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-04 00:56:12.466931 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-04 00:56:12.466936 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-04 00:56:12.466946 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.466952 | orchestrator | 2025-09-04 00:56:12.466957 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-09-04 00:56:12.466962 | orchestrator | Thursday 04 September 2025 00:48:43 +0000 (0:00:00.782) 0:03:46.916 **** 2025-09-04 00:56:12.466967 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.466973 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.466978 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.466983 | orchestrator | 2025-09-04 00:56:12.466989 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-09-04 00:56:12.466994 | orchestrator | Thursday 04 September 2025 00:48:43 +0000 (0:00:00.313) 0:03:47.230 **** 2025-09-04 00:56:12.467000 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.467005 | orchestrator | skipping: [testbed-node-4] 
2025-09-04 00:56:12.467010 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.467015 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.467021 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.467039 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.467045 | orchestrator | 2025-09-04 00:56:12.467050 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-09-04 00:56:12.467056 | orchestrator | Thursday 04 September 2025 00:48:44 +0000 (0:00:00.833) 0:03:48.064 **** 2025-09-04 00:56:12.467061 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.467075 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.467081 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.467087 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:56:12.467092 | orchestrator | 2025-09-04 00:56:12.467097 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-09-04 00:56:12.467103 | orchestrator | Thursday 04 September 2025 00:48:45 +0000 (0:00:00.839) 0:03:48.903 **** 2025-09-04 00:56:12.467108 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.467113 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.467119 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.467124 | orchestrator | 2025-09-04 00:56:12.467129 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-09-04 00:56:12.467135 | orchestrator | Thursday 04 September 2025 00:48:45 +0000 (0:00:00.418) 0:03:49.322 **** 2025-09-04 00:56:12.467140 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:56:12.467145 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:56:12.467151 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:56:12.467156 | orchestrator | 2025-09-04 00:56:12.467161 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-09-04 00:56:12.467167 | orchestrator | Thursday 04 September 2025 00:48:47 +0000 (0:00:01.675) 0:03:50.998 **** 2025-09-04 00:56:12.467172 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-04 00:56:12.467178 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-04 00:56:12.467183 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-04 00:56:12.467188 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.467193 | orchestrator | 2025-09-04 00:56:12.467199 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-09-04 00:56:12.467204 | orchestrator | Thursday 04 September 2025 00:48:48 +0000 (0:00:00.763) 0:03:51.761 **** 2025-09-04 00:56:12.467209 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.467215 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.467220 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.467225 | orchestrator | 2025-09-04 00:56:12.467231 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-09-04 00:56:12.467236 | orchestrator | 2025-09-04 00:56:12.467242 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-04 00:56:12.467247 | orchestrator | Thursday 04 September 2025 00:48:48 +0000 (0:00:00.638) 0:03:52.400 **** 2025-09-04 00:56:12.467259 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:56:12.467264 | orchestrator | 2025-09-04 00:56:12.467270 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-04 00:56:12.467275 | orchestrator | Thursday 04 September 2025 00:48:49 +0000 (0:00:00.573) 0:03:52.973 **** 2025-09-04 00:56:12.467281 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:56:12.467286 | orchestrator | 2025-09-04 00:56:12.467291 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-04 00:56:12.467297 | orchestrator | Thursday 04 September 2025 00:48:49 +0000 (0:00:00.476) 0:03:53.450 **** 2025-09-04 00:56:12.467302 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.467307 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.467312 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.467318 | orchestrator | 2025-09-04 00:56:12.467323 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-04 00:56:12.467329 | orchestrator | Thursday 04 September 2025 00:48:50 +0000 (0:00:00.721) 0:03:54.171 **** 2025-09-04 00:56:12.467334 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.467339 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.467344 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.467350 | orchestrator | 2025-09-04 00:56:12.467355 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-04 00:56:12.467360 | orchestrator | Thursday 04 September 2025 00:48:51 +0000 (0:00:00.435) 0:03:54.607 **** 2025-09-04 00:56:12.467366 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.467371 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.467376 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.467382 | orchestrator | 2025-09-04 00:56:12.467387 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-04 00:56:12.467392 | orchestrator | Thursday 04 September 2025 00:48:51 +0000 (0:00:00.345) 0:03:54.953 **** 2025-09-04 00:56:12.467398 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.467403 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.467408 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.467414 | orchestrator | 2025-09-04 00:56:12.467419 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-04 00:56:12.467424 | orchestrator | Thursday 04 September 2025 00:48:51 +0000 (0:00:00.318) 0:03:55.271 **** 2025-09-04 00:56:12.467430 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.467435 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.467440 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.467446 | orchestrator | 2025-09-04 00:56:12.467451 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-04 00:56:12.467456 | orchestrator | Thursday 04 September 2025 00:48:52 +0000 (0:00:00.805) 0:03:56.078 **** 2025-09-04 00:56:12.467462 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.467467 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.467472 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.467478 | 
orchestrator | 2025-09-04 00:56:12.467483 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-04 00:56:12.467488 | orchestrator | Thursday 04 September 2025 00:48:52 +0000 (0:00:00.378) 0:03:56.456 **** 2025-09-04 00:56:12.467508 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.467514 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.467519 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.467525 | orchestrator | 2025-09-04 00:56:12.467530 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-04 00:56:12.467535 | orchestrator | Thursday 04 September 2025 00:48:53 +0000 (0:00:00.965) 0:03:57.422 **** 2025-09-04 00:56:12.467541 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.467546 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.467551 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.467560 | orchestrator | 2025-09-04 00:56:12.467565 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-04 00:56:12.467571 | orchestrator | Thursday 04 September 2025 00:48:54 +0000 (0:00:01.031) 0:03:58.453 **** 2025-09-04 00:56:12.467576 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.467581 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.467587 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.467592 | orchestrator | 2025-09-04 00:56:12.467597 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-04 00:56:12.467603 | orchestrator | Thursday 04 September 2025 00:48:56 +0000 (0:00:01.103) 0:03:59.557 **** 2025-09-04 00:56:12.467608 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.467613 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.467619 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.467624 | orchestrator | 2025-09-04 00:56:12.467629 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-04 00:56:12.467635 | orchestrator | Thursday 04 September 2025 00:48:56 +0000 (0:00:00.284) 0:03:59.842 **** 2025-09-04 00:56:12.467640 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.467645 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.467650 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.467656 | orchestrator | 2025-09-04 00:56:12.467661 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-04 00:56:12.467666 | orchestrator | Thursday 04 September 2025 00:48:56 +0000 (0:00:00.578) 0:04:00.420 **** 2025-09-04 00:56:12.467672 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.467677 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.467682 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.467688 | orchestrator | 2025-09-04 00:56:12.467693 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-04 00:56:12.467698 | orchestrator | Thursday 04 September 2025 00:48:57 +0000 (0:00:00.268) 0:04:00.689 **** 2025-09-04 00:56:12.467704 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.467709 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.467714 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.467720 | orchestrator | 2025-09-04 00:56:12.467725 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] 
****************************** 2025-09-04 00:56:12.467730 | orchestrator | Thursday 04 September 2025 00:48:57 +0000 (0:00:00.225) 0:04:00.914 **** 2025-09-04 00:56:12.467738 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.467743 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.467749 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.467754 | orchestrator | 2025-09-04 00:56:12.467759 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-04 00:56:12.467764 | orchestrator | Thursday 04 September 2025 00:48:57 +0000 (0:00:00.276) 0:04:01.191 **** 2025-09-04 00:56:12.467770 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.467775 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.467780 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.467785 | orchestrator | 2025-09-04 00:56:12.467791 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-04 00:56:12.467796 | orchestrator | Thursday 04 September 2025 00:48:58 +0000 (0:00:00.489) 0:04:01.681 **** 2025-09-04 00:56:12.467801 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.467807 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.467812 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.467817 | orchestrator | 2025-09-04 00:56:12.467822 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-04 00:56:12.467828 | orchestrator | Thursday 04 September 2025 00:48:58 +0000 (0:00:00.320) 0:04:02.002 **** 2025-09-04 00:56:12.467833 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.467838 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.467844 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.467849 | orchestrator | 2025-09-04 00:56:12.467854 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-04 00:56:12.467863 | orchestrator | Thursday 04 September 2025 00:48:58 +0000 (0:00:00.320) 0:04:02.322 **** 2025-09-04 00:56:12.467868 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.467874 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.467879 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.467884 | orchestrator | 2025-09-04 00:56:12.467890 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-04 00:56:12.467895 | orchestrator | Thursday 04 September 2025 00:48:59 +0000 (0:00:00.378) 0:04:02.701 **** 2025-09-04 00:56:12.467900 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.467906 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.467911 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.467916 | orchestrator | 2025-09-04 00:56:12.467921 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-09-04 00:56:12.467927 | orchestrator | Thursday 04 September 2025 00:49:00 +0000 (0:00:00.941) 0:04:03.643 **** 2025-09-04 00:56:12.467932 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.467937 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.467943 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.467948 | orchestrator | 2025-09-04 00:56:12.467953 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-09-04 00:56:12.467959 | orchestrator | Thursday 04 September 2025 00:49:00 +0000 (0:00:00.342) 
0:04:03.985 **** 2025-09-04 00:56:12.467964 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:56:12.467969 | orchestrator | 2025-09-04 00:56:12.467975 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-09-04 00:56:12.467980 | orchestrator | Thursday 04 September 2025 00:49:01 +0000 (0:00:00.735) 0:04:04.720 **** 2025-09-04 00:56:12.467985 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.467991 | orchestrator | 2025-09-04 00:56:12.468008 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-09-04 00:56:12.468015 | orchestrator | Thursday 04 September 2025 00:49:01 +0000 (0:00:00.131) 0:04:04.852 **** 2025-09-04 00:56:12.468020 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-09-04 00:56:12.468025 | orchestrator | 2025-09-04 00:56:12.468031 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2025-09-04 00:56:12.468036 | orchestrator | Thursday 04 September 2025 00:49:02 +0000 (0:00:00.733) 0:04:05.585 **** 2025-09-04 00:56:12.468041 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.468047 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.468052 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.468057 | orchestrator | 2025-09-04 00:56:12.468063 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-09-04 00:56:12.468076 | orchestrator | Thursday 04 September 2025 00:49:02 +0000 (0:00:00.270) 0:04:05.855 **** 2025-09-04 00:56:12.468082 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.468087 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.468093 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.468098 | orchestrator | 2025-09-04 00:56:12.468103 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-09-04 00:56:12.468109 | orchestrator | Thursday 04 September 2025 00:49:02 +0000 (0:00:00.304) 0:04:06.160 **** 2025-09-04 00:56:12.468114 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:56:12.468119 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:56:12.468125 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:56:12.468130 | orchestrator | 2025-09-04 00:56:12.468135 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-09-04 00:56:12.468141 | orchestrator | Thursday 04 September 2025 00:49:03 +0000 (0:00:01.277) 0:04:07.437 **** 2025-09-04 00:56:12.468146 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:56:12.468152 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:56:12.468157 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:56:12.468162 | orchestrator | 2025-09-04 00:56:12.468173 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-09-04 00:56:12.468179 | orchestrator | Thursday 04 September 2025 00:49:04 +0000 (0:00:00.989) 0:04:08.427 **** 2025-09-04 00:56:12.468184 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:56:12.468189 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:56:12.468195 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:56:12.468200 | orchestrator | 2025-09-04 00:56:12.468205 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-09-04 00:56:12.468211 | 
orchestrator | Thursday 04 September 2025 00:49:05 +0000 (0:00:00.663) 0:04:09.091 **** 2025-09-04 00:56:12.468216 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.468221 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.468227 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.468232 | orchestrator | 2025-09-04 00:56:12.468237 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-09-04 00:56:12.468246 | orchestrator | Thursday 04 September 2025 00:49:06 +0000 (0:00:00.744) 0:04:09.835 **** 2025-09-04 00:56:12.468251 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:56:12.468257 | orchestrator | 2025-09-04 00:56:12.468262 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-09-04 00:56:12.468267 | orchestrator | Thursday 04 September 2025 00:49:07 +0000 (0:00:01.267) 0:04:11.103 **** 2025-09-04 00:56:12.468273 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.468278 | orchestrator | 2025-09-04 00:56:12.468283 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-09-04 00:56:12.468289 | orchestrator | Thursday 04 September 2025 00:49:08 +0000 (0:00:00.696) 0:04:11.800 **** 2025-09-04 00:56:12.468294 | orchestrator | changed: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-04 00:56:12.468299 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-09-04 00:56:12.468305 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-04 00:56:12.468310 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-09-04 00:56:12.468316 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-04 00:56:12.468321 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-04 00:56:12.468326 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-04 00:56:12.468332 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-09-04 00:56:12.468337 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-04 00:56:12.468342 | orchestrator | changed: [testbed-node-1 -> {{ item }}] 2025-09-04 00:56:12.468348 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-09-04 00:56:12.468353 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-09-04 00:56:12.468359 | orchestrator | 2025-09-04 00:56:12.468364 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-09-04 00:56:12.468369 | orchestrator | Thursday 04 September 2025 00:49:12 +0000 (0:00:03.828) 0:04:15.628 **** 2025-09-04 00:56:12.468375 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:56:12.468380 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:56:12.468386 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:56:12.468391 | orchestrator | 2025-09-04 00:56:12.468396 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-09-04 00:56:12.468402 | orchestrator | Thursday 04 September 2025 00:49:13 +0000 (0:00:01.570) 0:04:17.198 **** 2025-09-04 00:56:12.468407 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.468412 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.468418 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.468423 | orchestrator | 2025-09-04 00:56:12.468429 | orchestrator | TASK [ceph-mon : Set_fact 
monmaptool container command] ************************ 2025-09-04 00:56:12.468434 | orchestrator | Thursday 04 September 2025 00:49:14 +0000 (0:00:00.367) 0:04:17.566 **** 2025-09-04 00:56:12.468439 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.468445 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.468453 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.468459 | orchestrator | 2025-09-04 00:56:12.468464 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-09-04 00:56:12.468469 | orchestrator | Thursday 04 September 2025 00:49:14 +0000 (0:00:00.314) 0:04:17.880 **** 2025-09-04 00:56:12.468475 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:56:12.468494 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:56:12.468500 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:56:12.468506 | orchestrator | 2025-09-04 00:56:12.468511 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-09-04 00:56:12.468517 | orchestrator | Thursday 04 September 2025 00:49:16 +0000 (0:00:01.771) 0:04:19.652 **** 2025-09-04 00:56:12.468522 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:56:12.468527 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:56:12.468533 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:56:12.468538 | orchestrator | 2025-09-04 00:56:12.468543 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-09-04 00:56:12.468549 | orchestrator | Thursday 04 September 2025 00:49:17 +0000 (0:00:01.667) 0:04:21.319 **** 2025-09-04 00:56:12.468554 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.468559 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.468565 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.468570 | orchestrator | 2025-09-04 00:56:12.468575 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-09-04 00:56:12.468581 | orchestrator | Thursday 04 September 2025 00:49:18 +0000 (0:00:00.325) 0:04:21.645 **** 2025-09-04 00:56:12.468586 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:56:12.468591 | orchestrator | 2025-09-04 00:56:12.468597 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-09-04 00:56:12.468602 | orchestrator | Thursday 04 September 2025 00:49:18 +0000 (0:00:00.540) 0:04:22.185 **** 2025-09-04 00:56:12.468607 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.468613 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.468618 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.468623 | orchestrator | 2025-09-04 00:56:12.468628 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-09-04 00:56:12.468634 | orchestrator | Thursday 04 September 2025 00:49:19 +0000 (0:00:00.586) 0:04:22.771 **** 2025-09-04 00:56:12.468639 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.468644 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.468650 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.468655 | orchestrator | 2025-09-04 00:56:12.468660 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-09-04 00:56:12.468666 | orchestrator | Thursday 04 September 2025 00:49:19 +0000 
(0:00:00.305) 0:04:23.077 **** 2025-09-04 00:56:12.468671 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:56:12.468676 | orchestrator | 2025-09-04 00:56:12.468681 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-09-04 00:56:12.468691 | orchestrator | Thursday 04 September 2025 00:49:20 +0000 (0:00:00.528) 0:04:23.606 **** 2025-09-04 00:56:12.468697 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:56:12.468702 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:56:12.468707 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:56:12.468713 | orchestrator | 2025-09-04 00:56:12.468718 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-09-04 00:56:12.468723 | orchestrator | Thursday 04 September 2025 00:49:22 +0000 (0:00:02.398) 0:04:26.004 **** 2025-09-04 00:56:12.468728 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:56:12.468734 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:56:12.468739 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:56:12.468745 | orchestrator | 2025-09-04 00:56:12.468750 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-09-04 00:56:12.468759 | orchestrator | Thursday 04 September 2025 00:49:23 +0000 (0:00:01.200) 0:04:27.205 **** 2025-09-04 00:56:12.468764 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:56:12.468769 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:56:12.468774 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:56:12.468780 | orchestrator | 2025-09-04 00:56:12.468785 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-09-04 00:56:12.468790 | orchestrator | Thursday 04 September 2025 00:49:25 +0000 (0:00:01.822) 0:04:29.027 **** 2025-09-04 00:56:12.468796 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:56:12.468801 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:56:12.468806 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:56:12.468811 | orchestrator | 2025-09-04 00:56:12.468817 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-09-04 00:56:12.468822 | orchestrator | Thursday 04 September 2025 00:49:27 +0000 (0:00:01.987) 0:04:31.015 **** 2025-09-04 00:56:12.468827 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:56:12.468833 | orchestrator | 2025-09-04 00:56:12.468838 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2025-09-04 00:56:12.468843 | orchestrator | Thursday 04 September 2025 00:49:28 +0000 (0:00:00.789) 0:04:31.804 **** 2025-09-04 00:56:12.468848 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2025-09-04 00:56:12.468854 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.468859 | orchestrator | 2025-09-04 00:56:12.468864 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-09-04 00:56:12.468870 | orchestrator | Thursday 04 September 2025 00:49:50 +0000 (0:00:21.903) 0:04:53.707 **** 2025-09-04 00:56:12.468875 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.468880 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.468886 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.468891 | orchestrator | 2025-09-04 00:56:12.468896 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-09-04 00:56:12.468902 | orchestrator | Thursday 04 September 2025 00:50:00 +0000 (0:00:10.728) 0:05:04.436 **** 2025-09-04 00:56:12.468907 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.468912 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.468917 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.468923 | orchestrator | 2025-09-04 00:56:12.468928 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-09-04 00:56:12.468947 | orchestrator | Thursday 04 September 2025 00:50:01 +0000 (0:00:00.328) 0:05:04.764 **** 2025-09-04 00:56:12.468955 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__023eb8bb424ed13114a4d3333eef1f81cf3759ca'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-09-04 00:56:12.468962 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__023eb8bb424ed13114a4d3333eef1f81cf3759ca'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-09-04 00:56:12.468968 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__023eb8bb424ed13114a4d3333eef1f81cf3759ca'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-09-04 00:56:12.468975 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__023eb8bb424ed13114a4d3333eef1f81cf3759ca'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-09-04 00:56:12.468988 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__023eb8bb424ed13114a4d3333eef1f81cf3759ca'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-09-04 00:56:12.468994 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 
'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__023eb8bb424ed13114a4d3333eef1f81cf3759ca'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__023eb8bb424ed13114a4d3333eef1f81cf3759ca'}])  2025-09-04 00:56:12.469001 | orchestrator | 2025-09-04 00:56:12.469006 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-04 00:56:12.469012 | orchestrator | Thursday 04 September 2025 00:50:16 +0000 (0:00:15.080) 0:05:19.845 **** 2025-09-04 00:56:12.469017 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.469022 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.469028 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.469033 | orchestrator | 2025-09-04 00:56:12.469038 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-09-04 00:56:12.469044 | orchestrator | Thursday 04 September 2025 00:50:16 +0000 (0:00:00.418) 0:05:20.264 **** 2025-09-04 00:56:12.469049 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:56:12.469054 | orchestrator | 2025-09-04 00:56:12.469060 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-09-04 00:56:12.469065 | orchestrator | Thursday 04 September 2025 00:50:17 +0000 (0:00:00.808) 0:05:21.073 **** 2025-09-04 00:56:12.469094 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.469099 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.469105 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.469110 | orchestrator | 2025-09-04 00:56:12.469115 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-09-04 00:56:12.469121 | orchestrator | Thursday 04 September 2025 00:50:17 +0000 (0:00:00.320) 0:05:21.393 **** 2025-09-04 00:56:12.469126 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.469132 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.469137 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.469143 | orchestrator | 2025-09-04 00:56:12.469148 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-09-04 00:56:12.469153 | orchestrator | Thursday 04 September 2025 00:50:18 +0000 (0:00:00.369) 0:05:21.763 **** 2025-09-04 00:56:12.469159 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-04 00:56:12.469164 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-04 00:56:12.469169 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-04 00:56:12.469175 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.469180 | orchestrator | 2025-09-04 00:56:12.469186 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-09-04 00:56:12.469191 | orchestrator | Thursday 04 September 2025 00:50:19 +0000 (0:00:00.851) 0:05:22.615 **** 2025-09-04 00:56:12.469196 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.469202 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.469222 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.469228 | orchestrator | 2025-09-04 00:56:12.469234 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 
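Note on the ceph-mon play that just finished: its "Set cluster configs" task pushes the generated settings into the cluster's configuration database held by the monitors. Expressed as plain CLI calls it is roughly equivalent to the sketch below; this is an illustration only, since ceph-ansible applies the values through its own config module, and running the client by hand on a mon node with admin credentials is an assumption here, not something taken from the log.

    # approximate manual equivalent (assumption: executed on a mon node with the admin keyring)
    ceph config set global public_network 192.168.16.0/20
    ceph config set global cluster_network 192.168.16.0/20
    ceph config set global osd_pool_default_crush_rule -1
    ceph config set global ms_bind_ipv6 false
    ceph config set global ms_bind_ipv4 true
    # osd_crush_chooseleaf_type carried an omit placeholder and was skipped, so it keeps its default

The five changed items and one skipped item correspond to the per-item results shown above for testbed-node-0.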
2025-09-04 00:56:12.469244 | orchestrator | 2025-09-04 00:56:12.469250 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-04 00:56:12.469255 | orchestrator | Thursday 04 September 2025 00:50:19 +0000 (0:00:00.830) 0:05:23.446 **** 2025-09-04 00:56:12.469261 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:56:12.469266 | orchestrator | 2025-09-04 00:56:12.469272 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-04 00:56:12.469277 | orchestrator | Thursday 04 September 2025 00:50:20 +0000 (0:00:00.527) 0:05:23.974 **** 2025-09-04 00:56:12.469282 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:56:12.469288 | orchestrator | 2025-09-04 00:56:12.469292 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-04 00:56:12.469297 | orchestrator | Thursday 04 September 2025 00:50:21 +0000 (0:00:00.754) 0:05:24.728 **** 2025-09-04 00:56:12.469302 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.469307 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.469312 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.469316 | orchestrator | 2025-09-04 00:56:12.469321 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-04 00:56:12.469326 | orchestrator | Thursday 04 September 2025 00:50:21 +0000 (0:00:00.738) 0:05:25.467 **** 2025-09-04 00:56:12.469331 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.469335 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.469340 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.469345 | orchestrator | 2025-09-04 00:56:12.469350 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-04 00:56:12.469354 | orchestrator | Thursday 04 September 2025 00:50:22 +0000 (0:00:00.304) 0:05:25.772 **** 2025-09-04 00:56:12.469359 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.469364 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.469369 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.469373 | orchestrator | 2025-09-04 00:56:12.469378 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-04 00:56:12.469383 | orchestrator | Thursday 04 September 2025 00:50:22 +0000 (0:00:00.300) 0:05:26.073 **** 2025-09-04 00:56:12.469388 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.469392 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.469397 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.469402 | orchestrator | 2025-09-04 00:56:12.469409 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-04 00:56:12.469414 | orchestrator | Thursday 04 September 2025 00:50:22 +0000 (0:00:00.282) 0:05:26.356 **** 2025-09-04 00:56:12.469419 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.469424 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.469428 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.469433 | orchestrator | 2025-09-04 00:56:12.469438 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-04 
00:56:12.469443 | orchestrator | Thursday 04 September 2025 00:50:23 +0000 (0:00:01.082) 0:05:27.438 **** 2025-09-04 00:56:12.469447 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.469452 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.469457 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.469462 | orchestrator | 2025-09-04 00:56:12.469466 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-04 00:56:12.469471 | orchestrator | Thursday 04 September 2025 00:50:24 +0000 (0:00:00.333) 0:05:27.771 **** 2025-09-04 00:56:12.469476 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.469481 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.469485 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.469490 | orchestrator | 2025-09-04 00:56:12.469495 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-04 00:56:12.469502 | orchestrator | Thursday 04 September 2025 00:50:24 +0000 (0:00:00.314) 0:05:28.086 **** 2025-09-04 00:56:12.469507 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.469512 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.469516 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.469521 | orchestrator | 2025-09-04 00:56:12.469526 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-04 00:56:12.469531 | orchestrator | Thursday 04 September 2025 00:50:25 +0000 (0:00:00.860) 0:05:28.946 **** 2025-09-04 00:56:12.469535 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.469540 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.469545 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.469550 | orchestrator | 2025-09-04 00:56:12.469554 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-04 00:56:12.469559 | orchestrator | Thursday 04 September 2025 00:50:26 +0000 (0:00:01.012) 0:05:29.959 **** 2025-09-04 00:56:12.469564 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.469569 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.469574 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.469578 | orchestrator | 2025-09-04 00:56:12.469583 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-04 00:56:12.469588 | orchestrator | Thursday 04 September 2025 00:50:26 +0000 (0:00:00.303) 0:05:30.263 **** 2025-09-04 00:56:12.469593 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.469597 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.469602 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.469607 | orchestrator | 2025-09-04 00:56:12.469612 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-04 00:56:12.469617 | orchestrator | Thursday 04 September 2025 00:50:27 +0000 (0:00:00.301) 0:05:30.565 **** 2025-09-04 00:56:12.469621 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.469626 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.469631 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.469636 | orchestrator | 2025-09-04 00:56:12.469640 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-04 00:56:12.469658 | orchestrator | Thursday 04 September 2025 00:50:27 +0000 (0:00:00.339) 0:05:30.904 **** 
2025-09-04 00:56:12.469663 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.469668 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.469672 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.469677 | orchestrator | 2025-09-04 00:56:12.469682 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-04 00:56:12.469686 | orchestrator | Thursday 04 September 2025 00:50:27 +0000 (0:00:00.547) 0:05:31.452 **** 2025-09-04 00:56:12.469691 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.469696 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.469701 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.469705 | orchestrator | 2025-09-04 00:56:12.469710 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-04 00:56:12.469715 | orchestrator | Thursday 04 September 2025 00:50:28 +0000 (0:00:00.320) 0:05:31.772 **** 2025-09-04 00:56:12.469720 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.469724 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.469729 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.469734 | orchestrator | 2025-09-04 00:56:12.469738 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-04 00:56:12.469743 | orchestrator | Thursday 04 September 2025 00:50:28 +0000 (0:00:00.301) 0:05:32.074 **** 2025-09-04 00:56:12.469748 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.469753 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.469757 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.469762 | orchestrator | 2025-09-04 00:56:12.469767 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-04 00:56:12.469772 | orchestrator | Thursday 04 September 2025 00:50:28 +0000 (0:00:00.285) 0:05:32.359 **** 2025-09-04 00:56:12.469780 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.469784 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.469789 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.469794 | orchestrator | 2025-09-04 00:56:12.469799 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-04 00:56:12.469803 | orchestrator | Thursday 04 September 2025 00:50:29 +0000 (0:00:00.570) 0:05:32.930 **** 2025-09-04 00:56:12.469808 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.469813 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.469817 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.469822 | orchestrator | 2025-09-04 00:56:12.469827 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-04 00:56:12.469832 | orchestrator | Thursday 04 September 2025 00:50:29 +0000 (0:00:00.292) 0:05:33.223 **** 2025-09-04 00:56:12.469836 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.469841 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.469846 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.469850 | orchestrator | 2025-09-04 00:56:12.469855 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-09-04 00:56:12.469862 | orchestrator | Thursday 04 September 2025 00:50:30 +0000 (0:00:00.559) 0:05:33.783 **** 2025-09-04 00:56:12.469867 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-04 
00:56:12.469872 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-04 00:56:12.469876 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-04 00:56:12.469881 | orchestrator | 2025-09-04 00:56:12.469886 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-09-04 00:56:12.469891 | orchestrator | Thursday 04 September 2025 00:50:31 +0000 (0:00:00.900) 0:05:34.683 **** 2025-09-04 00:56:12.469895 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:56:12.469900 | orchestrator | 2025-09-04 00:56:12.469905 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-09-04 00:56:12.469910 | orchestrator | Thursday 04 September 2025 00:50:31 +0000 (0:00:00.750) 0:05:35.433 **** 2025-09-04 00:56:12.469914 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:56:12.469919 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:56:12.469924 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:56:12.469929 | orchestrator | 2025-09-04 00:56:12.469934 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-09-04 00:56:12.469938 | orchestrator | Thursday 04 September 2025 00:50:32 +0000 (0:00:00.684) 0:05:36.117 **** 2025-09-04 00:56:12.469943 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.469948 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.469952 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.469957 | orchestrator | 2025-09-04 00:56:12.469962 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-09-04 00:56:12.469967 | orchestrator | Thursday 04 September 2025 00:50:32 +0000 (0:00:00.312) 0:05:36.430 **** 2025-09-04 00:56:12.469971 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-04 00:56:12.469976 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-04 00:56:12.469981 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-04 00:56:12.469986 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-09-04 00:56:12.469990 | orchestrator | 2025-09-04 00:56:12.469995 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-09-04 00:56:12.470000 | orchestrator | Thursday 04 September 2025 00:50:43 +0000 (0:00:11.008) 0:05:47.438 **** 2025-09-04 00:56:12.470005 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.470010 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.470027 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.470033 | orchestrator | 2025-09-04 00:56:12.470038 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-09-04 00:56:12.470046 | orchestrator | Thursday 04 September 2025 00:50:44 +0000 (0:00:00.619) 0:05:48.058 **** 2025-09-04 00:56:12.470051 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-04 00:56:12.470055 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-04 00:56:12.470060 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-04 00:56:12.470065 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-09-04 00:56:12.470077 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-04 
00:56:12.470095 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-04 00:56:12.470101 | orchestrator | 2025-09-04 00:56:12.470105 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-09-04 00:56:12.470110 | orchestrator | Thursday 04 September 2025 00:50:46 +0000 (0:00:02.317) 0:05:50.375 **** 2025-09-04 00:56:12.470115 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-04 00:56:12.470120 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-04 00:56:12.470124 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-04 00:56:12.470129 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-04 00:56:12.470134 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-09-04 00:56:12.470139 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-09-04 00:56:12.470143 | orchestrator | 2025-09-04 00:56:12.470148 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-09-04 00:56:12.470153 | orchestrator | Thursday 04 September 2025 00:50:48 +0000 (0:00:01.242) 0:05:51.618 **** 2025-09-04 00:56:12.470158 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.470163 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.470167 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.470172 | orchestrator | 2025-09-04 00:56:12.470177 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-09-04 00:56:12.470182 | orchestrator | Thursday 04 September 2025 00:50:48 +0000 (0:00:00.701) 0:05:52.319 **** 2025-09-04 00:56:12.470186 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.470191 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.470196 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.470201 | orchestrator | 2025-09-04 00:56:12.470206 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-09-04 00:56:12.470211 | orchestrator | Thursday 04 September 2025 00:50:49 +0000 (0:00:00.592) 0:05:52.912 **** 2025-09-04 00:56:12.470215 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.470220 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.470225 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.470230 | orchestrator | 2025-09-04 00:56:12.470234 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2025-09-04 00:56:12.470239 | orchestrator | Thursday 04 September 2025 00:50:49 +0000 (0:00:00.297) 0:05:53.209 **** 2025-09-04 00:56:12.470244 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:56:12.470249 | orchestrator | 2025-09-04 00:56:12.470253 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-09-04 00:56:12.470258 | orchestrator | Thursday 04 September 2025 00:50:50 +0000 (0:00:00.535) 0:05:53.745 **** 2025-09-04 00:56:12.470263 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.470268 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.470275 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.470280 | orchestrator | 2025-09-04 00:56:12.470285 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-09-04 00:56:12.470289 | orchestrator | Thursday 04 September 
2025 00:50:50 +0000 (0:00:00.308) 0:05:54.053 **** 2025-09-04 00:56:12.470294 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.470299 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.470303 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.470308 | orchestrator | 2025-09-04 00:56:12.470313 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-09-04 00:56:12.470322 | orchestrator | Thursday 04 September 2025 00:50:51 +0000 (0:00:00.586) 0:05:54.640 **** 2025-09-04 00:56:12.470327 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:56:12.470332 | orchestrator | 2025-09-04 00:56:12.470336 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2025-09-04 00:56:12.470341 | orchestrator | Thursday 04 September 2025 00:50:51 +0000 (0:00:00.547) 0:05:55.187 **** 2025-09-04 00:56:12.470346 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:56:12.470351 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:56:12.470355 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:56:12.470360 | orchestrator | 2025-09-04 00:56:12.470365 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-09-04 00:56:12.470370 | orchestrator | Thursday 04 September 2025 00:50:52 +0000 (0:00:01.250) 0:05:56.438 **** 2025-09-04 00:56:12.470374 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:56:12.470379 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:56:12.470384 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:56:12.470389 | orchestrator | 2025-09-04 00:56:12.470393 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-09-04 00:56:12.470398 | orchestrator | Thursday 04 September 2025 00:50:54 +0000 (0:00:01.485) 0:05:57.923 **** 2025-09-04 00:56:12.470403 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:56:12.470408 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:56:12.470412 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:56:12.470417 | orchestrator | 2025-09-04 00:56:12.470422 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-09-04 00:56:12.470426 | orchestrator | Thursday 04 September 2025 00:50:56 +0000 (0:00:01.739) 0:05:59.663 **** 2025-09-04 00:56:12.470431 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:56:12.470436 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:56:12.470441 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:56:12.470445 | orchestrator | 2025-09-04 00:56:12.470450 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2025-09-04 00:56:12.470455 | orchestrator | Thursday 04 September 2025 00:50:58 +0000 (0:00:02.066) 0:06:01.729 **** 2025-09-04 00:56:12.470460 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.470464 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.470469 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-09-04 00:56:12.470474 | orchestrator | 2025-09-04 00:56:12.470479 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-09-04 00:56:12.470483 | orchestrator | Thursday 04 September 2025 00:50:58 +0000 (0:00:00.440) 0:06:02.169 **** 2025-09-04 00:56:12.470500 | 
orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-09-04 00:56:12.470506 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-09-04 00:56:12.470511 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-09-04 00:56:12.470515 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-09-04 00:56:12.470520 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 2025-09-04 00:56:12.470525 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-09-04 00:56:12.470530 | orchestrator | 2025-09-04 00:56:12.470535 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-09-04 00:56:12.470540 | orchestrator | Thursday 04 September 2025 00:51:29 +0000 (0:00:30.942) 0:06:33.112 **** 2025-09-04 00:56:12.470544 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-09-04 00:56:12.470549 | orchestrator | 2025-09-04 00:56:12.470554 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-09-04 00:56:12.470562 | orchestrator | Thursday 04 September 2025 00:51:30 +0000 (0:00:01.334) 0:06:34.447 **** 2025-09-04 00:56:12.470567 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.470571 | orchestrator | 2025-09-04 00:56:12.470576 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-09-04 00:56:12.470581 | orchestrator | Thursday 04 September 2025 00:51:31 +0000 (0:00:00.322) 0:06:34.769 **** 2025-09-04 00:56:12.470585 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.470590 | orchestrator | 2025-09-04 00:56:12.470595 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-09-04 00:56:12.470600 | orchestrator | Thursday 04 September 2025 00:51:31 +0000 (0:00:00.141) 0:06:34.910 **** 2025-09-04 00:56:12.470604 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-09-04 00:56:12.470609 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-09-04 00:56:12.470614 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-09-04 00:56:12.470618 | orchestrator | 2025-09-04 00:56:12.470623 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2025-09-04 00:56:12.470628 | orchestrator | Thursday 04 September 2025 00:51:37 +0000 (0:00:06.537) 0:06:41.448 **** 2025-09-04 00:56:12.470633 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-09-04 00:56:12.470637 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-09-04 00:56:12.470642 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-09-04 00:56:12.470647 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-09-04 00:56:12.470652 | orchestrator | 2025-09-04 00:56:12.470656 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-04 00:56:12.470661 | orchestrator | Thursday 04 September 2025 00:51:42 +0000 (0:00:04.669) 0:06:46.117 **** 2025-09-04 
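The module reconciliation above first disables the modules that are enabled but not requested (iostat, nfs, restful) and then enables the requested ones (dashboard, prometheus); balancer and status are skipped here, as they are always-on modules in recent Ceph releases. Done by hand with the stock ceph CLI this is roughly:

    ceph mgr module ls            # JSON list of enabled/available modules
    ceph mgr module disable restful
    ceph mgr module enable dashboard
    ceph mgr module enable prometheus
    ceph mgr services             # shows the dashboard/prometheus endpoints once active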
00:56:12.470666 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:56:12.470670 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:56:12.470675 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:56:12.470680 | orchestrator | 2025-09-04 00:56:12.470684 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-09-04 00:56:12.470689 | orchestrator | Thursday 04 September 2025 00:51:43 +0000 (0:00:00.976) 0:06:47.094 **** 2025-09-04 00:56:12.470694 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:56:12.470698 | orchestrator | 2025-09-04 00:56:12.470703 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-09-04 00:56:12.470708 | orchestrator | Thursday 04 September 2025 00:51:44 +0000 (0:00:00.519) 0:06:47.614 **** 2025-09-04 00:56:12.470713 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.470717 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.470722 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.470727 | orchestrator | 2025-09-04 00:56:12.470732 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-09-04 00:56:12.470736 | orchestrator | Thursday 04 September 2025 00:51:44 +0000 (0:00:00.313) 0:06:47.928 **** 2025-09-04 00:56:12.470741 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:56:12.470746 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:56:12.470750 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:56:12.470755 | orchestrator | 2025-09-04 00:56:12.470760 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-09-04 00:56:12.470765 | orchestrator | Thursday 04 September 2025 00:51:45 +0000 (0:00:01.430) 0:06:49.359 **** 2025-09-04 00:56:12.470769 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-04 00:56:12.470774 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-04 00:56:12.470779 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-04 00:56:12.470786 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.470791 | orchestrator | 2025-09-04 00:56:12.470796 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-09-04 00:56:12.470800 | orchestrator | Thursday 04 September 2025 00:51:46 +0000 (0:00:00.648) 0:06:50.007 **** 2025-09-04 00:56:12.470839 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.470849 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.470854 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.470859 | orchestrator | 2025-09-04 00:56:12.470864 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-09-04 00:56:12.470868 | orchestrator | 2025-09-04 00:56:12.470873 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-04 00:56:12.470878 | orchestrator | Thursday 04 September 2025 00:51:46 +0000 (0:00:00.544) 0:06:50.552 **** 2025-09-04 00:56:12.470896 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:56:12.470902 | orchestrator | 2025-09-04 00:56:12.470906 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-04 
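The mgr handler block above only stages a restart helper; the actual "Restart ceph mgr daemon(s)" step is skipped on this run. When it does fire, the effect is essentially a serial restart of the units generated earlier plus a health check, along the lines of (unit/instance naming assumed from the systemd template step above, not shown verbatim in this log):

    systemctl restart ceph-mgr@testbed-node-0.service
    ceph mgr stat     # confirm an active mgr (and standbys) before moving to the next node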
00:56:12.470911 | orchestrator | Thursday 04 September 2025 00:51:47 +0000 (0:00:00.716) 0:06:51.268 **** 2025-09-04 00:56:12.470916 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:56:12.470921 | orchestrator | 2025-09-04 00:56:12.470926 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-04 00:56:12.470930 | orchestrator | Thursday 04 September 2025 00:51:48 +0000 (0:00:00.511) 0:06:51.780 **** 2025-09-04 00:56:12.470935 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.470940 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.470944 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.470949 | orchestrator | 2025-09-04 00:56:12.470954 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-04 00:56:12.470959 | orchestrator | Thursday 04 September 2025 00:51:48 +0000 (0:00:00.284) 0:06:52.064 **** 2025-09-04 00:56:12.470963 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.470968 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.470973 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.470978 | orchestrator | 2025-09-04 00:56:12.470982 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-04 00:56:12.470987 | orchestrator | Thursday 04 September 2025 00:51:49 +0000 (0:00:00.946) 0:06:53.011 **** 2025-09-04 00:56:12.470992 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.470997 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.471001 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.471006 | orchestrator | 2025-09-04 00:56:12.471011 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-04 00:56:12.471016 | orchestrator | Thursday 04 September 2025 00:51:50 +0000 (0:00:00.676) 0:06:53.688 **** 2025-09-04 00:56:12.471020 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.471025 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.471030 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.471034 | orchestrator | 2025-09-04 00:56:12.471039 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-04 00:56:12.471044 | orchestrator | Thursday 04 September 2025 00:51:50 +0000 (0:00:00.686) 0:06:54.374 **** 2025-09-04 00:56:12.471049 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.471053 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.471058 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.471063 | orchestrator | 2025-09-04 00:56:12.471076 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-04 00:56:12.471083 | orchestrator | Thursday 04 September 2025 00:51:51 +0000 (0:00:00.302) 0:06:54.676 **** 2025-09-04 00:56:12.471088 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.471093 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.471098 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.471106 | orchestrator | 2025-09-04 00:56:12.471111 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-04 00:56:12.471115 | orchestrator | Thursday 04 September 2025 00:51:51 +0000 (0:00:00.568) 0:06:55.245 **** 2025-09-04 00:56:12.471120 | orchestrator | skipping: 
[testbed-node-3] 2025-09-04 00:56:12.471125 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.471130 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.471134 | orchestrator | 2025-09-04 00:56:12.471139 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-04 00:56:12.471144 | orchestrator | Thursday 04 September 2025 00:51:51 +0000 (0:00:00.303) 0:06:55.549 **** 2025-09-04 00:56:12.471148 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.471153 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.471158 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.471163 | orchestrator | 2025-09-04 00:56:12.471167 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-04 00:56:12.471172 | orchestrator | Thursday 04 September 2025 00:51:52 +0000 (0:00:00.708) 0:06:56.257 **** 2025-09-04 00:56:12.471177 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.471181 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.471186 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.471191 | orchestrator | 2025-09-04 00:56:12.471195 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-04 00:56:12.471200 | orchestrator | Thursday 04 September 2025 00:51:53 +0000 (0:00:00.875) 0:06:57.133 **** 2025-09-04 00:56:12.471205 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.471210 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.471214 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.471219 | orchestrator | 2025-09-04 00:56:12.471224 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-04 00:56:12.471229 | orchestrator | Thursday 04 September 2025 00:51:54 +0000 (0:00:00.601) 0:06:57.734 **** 2025-09-04 00:56:12.471233 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.471238 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.471243 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.471247 | orchestrator | 2025-09-04 00:56:12.471252 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-04 00:56:12.471257 | orchestrator | Thursday 04 September 2025 00:51:54 +0000 (0:00:00.300) 0:06:58.035 **** 2025-09-04 00:56:12.471262 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.471266 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.471271 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.471276 | orchestrator | 2025-09-04 00:56:12.471281 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-04 00:56:12.471285 | orchestrator | Thursday 04 September 2025 00:51:54 +0000 (0:00:00.312) 0:06:58.348 **** 2025-09-04 00:56:12.471290 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.471295 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.471299 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.471304 | orchestrator | 2025-09-04 00:56:12.471309 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-04 00:56:12.471314 | orchestrator | Thursday 04 September 2025 00:51:55 +0000 (0:00:00.334) 0:06:58.682 **** 2025-09-04 00:56:12.471318 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.471323 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.471330 | orchestrator | ok: 
[testbed-node-5] 2025-09-04 00:56:12.471335 | orchestrator | 2025-09-04 00:56:12.471340 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-04 00:56:12.471345 | orchestrator | Thursday 04 September 2025 00:51:55 +0000 (0:00:00.662) 0:06:59.345 **** 2025-09-04 00:56:12.471349 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.471354 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.471359 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.471363 | orchestrator | 2025-09-04 00:56:12.471368 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-04 00:56:12.471376 | orchestrator | Thursday 04 September 2025 00:51:56 +0000 (0:00:00.335) 0:06:59.680 **** 2025-09-04 00:56:12.471380 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.471385 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.471390 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.471394 | orchestrator | 2025-09-04 00:56:12.471399 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-04 00:56:12.471404 | orchestrator | Thursday 04 September 2025 00:51:56 +0000 (0:00:00.297) 0:06:59.977 **** 2025-09-04 00:56:12.471408 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.471413 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.471418 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.471422 | orchestrator | 2025-09-04 00:56:12.471427 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-04 00:56:12.471432 | orchestrator | Thursday 04 September 2025 00:51:56 +0000 (0:00:00.285) 0:07:00.263 **** 2025-09-04 00:56:12.471436 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.471441 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.471446 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.471450 | orchestrator | 2025-09-04 00:56:12.471455 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-04 00:56:12.471460 | orchestrator | Thursday 04 September 2025 00:51:57 +0000 (0:00:00.628) 0:07:00.892 **** 2025-09-04 00:56:12.471465 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.471469 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.471474 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.471479 | orchestrator | 2025-09-04 00:56:12.471483 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-09-04 00:56:12.471488 | orchestrator | Thursday 04 September 2025 00:51:57 +0000 (0:00:00.512) 0:07:01.404 **** 2025-09-04 00:56:12.471493 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.471497 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.471502 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.471507 | orchestrator | 2025-09-04 00:56:12.471512 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-09-04 00:56:12.471516 | orchestrator | Thursday 04 September 2025 00:51:58 +0000 (0:00:00.304) 0:07:01.708 **** 2025-09-04 00:56:12.471523 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-04 00:56:12.471528 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-04 00:56:12.471533 | orchestrator | ok: [testbed-node-3 
-> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-04 00:56:12.471538 | orchestrator | 2025-09-04 00:56:12.471542 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-09-04 00:56:12.471547 | orchestrator | Thursday 04 September 2025 00:51:59 +0000 (0:00:00.920) 0:07:02.628 **** 2025-09-04 00:56:12.471552 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:56:12.471556 | orchestrator | 2025-09-04 00:56:12.471561 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-09-04 00:56:12.471566 | orchestrator | Thursday 04 September 2025 00:51:59 +0000 (0:00:00.792) 0:07:03.420 **** 2025-09-04 00:56:12.471570 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.471575 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.471580 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.471584 | orchestrator | 2025-09-04 00:56:12.471589 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-09-04 00:56:12.471594 | orchestrator | Thursday 04 September 2025 00:52:00 +0000 (0:00:00.316) 0:07:03.737 **** 2025-09-04 00:56:12.471598 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.471603 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.471608 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.471612 | orchestrator | 2025-09-04 00:56:12.471617 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-09-04 00:56:12.471624 | orchestrator | Thursday 04 September 2025 00:52:00 +0000 (0:00:00.285) 0:07:04.023 **** 2025-09-04 00:56:12.471629 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.471634 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.471638 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.471643 | orchestrator | 2025-09-04 00:56:12.471648 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-09-04 00:56:12.471652 | orchestrator | Thursday 04 September 2025 00:52:01 +0000 (0:00:00.902) 0:07:04.926 **** 2025-09-04 00:56:12.471657 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.471662 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.471666 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.471671 | orchestrator | 2025-09-04 00:56:12.471676 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-09-04 00:56:12.471680 | orchestrator | Thursday 04 September 2025 00:52:01 +0000 (0:00:00.347) 0:07:05.273 **** 2025-09-04 00:56:12.471685 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-09-04 00:56:12.471690 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-09-04 00:56:12.471695 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-09-04 00:56:12.471699 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-09-04 00:56:12.471704 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-09-04 00:56:12.471713 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-09-04 
00:56:12.471718 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-09-04 00:56:12.471722 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-09-04 00:56:12.471727 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-09-04 00:56:12.471732 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-09-04 00:56:12.471736 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-09-04 00:56:12.471741 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-09-04 00:56:12.471746 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-09-04 00:56:12.471750 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-09-04 00:56:12.471755 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-09-04 00:56:12.471760 | orchestrator | 2025-09-04 00:56:12.471765 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2025-09-04 00:56:12.471769 | orchestrator | Thursday 04 September 2025 00:52:03 +0000 (0:00:02.122) 0:07:07.396 **** 2025-09-04 00:56:12.471774 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.471778 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.471783 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.471788 | orchestrator | 2025-09-04 00:56:12.471792 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-09-04 00:56:12.471797 | orchestrator | Thursday 04 September 2025 00:52:04 +0000 (0:00:00.315) 0:07:07.711 **** 2025-09-04 00:56:12.471802 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:56:12.471807 | orchestrator | 2025-09-04 00:56:12.471811 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-09-04 00:56:12.471816 | orchestrator | Thursday 04 September 2025 00:52:04 +0000 (0:00:00.779) 0:07:08.491 **** 2025-09-04 00:56:12.471821 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-09-04 00:56:12.471825 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-09-04 00:56:12.471833 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-09-04 00:56:12.471839 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-09-04 00:56:12.471844 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-09-04 00:56:12.471849 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-09-04 00:56:12.471854 | orchestrator | 2025-09-04 00:56:12.471858 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-09-04 00:56:12.471863 | orchestrator | Thursday 04 September 2025 00:52:05 +0000 (0:00:00.939) 0:07:09.430 **** 2025-09-04 00:56:12.471868 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-04 00:56:12.471873 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-04 00:56:12.471877 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-04 00:56:12.471882 | orchestrator | 
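The "Apply operating system tuning" task further up sets the usual Ceph OSD sysctls on the three OSD nodes, with vm.min_free_kbytes derived from the detected default. Applied by hand, the same settings would be (values as they appear in the log; the Ansible sysctl module also persists them, e.g. under /etc/sysctl.d/):

    sysctl -w fs.aio-max-nr=1048576
    sysctl -w fs.file-max=26234859
    sysctl -w vm.zone_reclaim_mode=0
    sysctl -w vm.swappiness=10
    sysctl -w vm.min_free_kbytes=67584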
2025-09-04 00:56:12.471887 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-09-04 00:56:12.471891 | orchestrator | Thursday 04 September 2025 00:52:08 +0000 (0:00:02.137) 0:07:11.568 **** 2025-09-04 00:56:12.471896 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-04 00:56:12.471901 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-04 00:56:12.471905 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:56:12.471910 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-04 00:56:12.471915 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-04 00:56:12.471920 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:56:12.471924 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-04 00:56:12.471929 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-04 00:56:12.471933 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:56:12.471938 | orchestrator | 2025-09-04 00:56:12.471943 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-09-04 00:56:12.471947 | orchestrator | Thursday 04 September 2025 00:52:09 +0000 (0:00:01.473) 0:07:13.042 **** 2025-09-04 00:56:12.471952 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-04 00:56:12.471957 | orchestrator | 2025-09-04 00:56:12.471962 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-09-04 00:56:12.471966 | orchestrator | Thursday 04 September 2025 00:52:11 +0000 (0:00:02.035) 0:07:15.078 **** 2025-09-04 00:56:12.471971 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:56:12.471976 | orchestrator | 2025-09-04 00:56:12.471981 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-09-04 00:56:12.471985 | orchestrator | Thursday 04 September 2025 00:52:12 +0000 (0:00:00.551) 0:07:15.629 **** 2025-09-04 00:56:12.471990 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-e3ae8186-8e2b-5882-a4eb-fc7766e33302', 'data_vg': 'ceph-e3ae8186-8e2b-5882-a4eb-fc7766e33302'}) 2025-09-04 00:56:12.471995 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4210a630-ee04-544c-a974-f1a5ed1af07b', 'data_vg': 'ceph-4210a630-ee04-544c-a974-f1a5ed1af07b'}) 2025-09-04 00:56:12.472003 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-23145802-20b8-57b1-85d3-97c3024bf9a4', 'data_vg': 'ceph-23145802-20b8-57b1-85d3-97c3024bf9a4'}) 2025-09-04 00:56:12.472007 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-7cbfcabc-5503-586a-bc65-b738adbafdd0', 'data_vg': 'ceph-7cbfcabc-5503-586a-bc65-b738adbafdd0'}) 2025-09-04 00:56:12.472012 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f811dc1d-ef4c-5806-9269-19449158751d', 'data_vg': 'ceph-f811dc1d-ef4c-5806-9269-19449158751d'}) 2025-09-04 00:56:12.472017 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b', 'data_vg': 'ceph-f7bb41d6-3a07-594b-a5ed-81bcd3eca58b'}) 2025-09-04 00:56:12.472022 | orchestrator | 2025-09-04 00:56:12.472029 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-09-04 00:56:12.472034 | orchestrator | Thursday 04 September 2025 00:52:53 +0000 (0:00:41.484) 0:07:57.114 **** 2025-09-04 
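The "Use ceph-volume to create osds" task is where most of this play's time goes (about 41 s here): each loop item maps one pre-created LVM volume group / logical volume pair to a BlueStore OSD, two per OSD node. Per item this is roughly equivalent to running, inside the Ceph container on the node (VG/LV names taken from the first item in the log):

    ceph-volume lvm create --bluestore \
        --data ceph-e3ae8186-8e2b-5882-a4eb-fc7766e33302/osd-block-e3ae8186-8e2b-5882-a4eb-fc7766e33302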
00:56:12.472039 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.472043 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.472048 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.472053 | orchestrator | 2025-09-04 00:56:12.472057 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-09-04 00:56:12.472062 | orchestrator | Thursday 04 September 2025 00:52:54 +0000 (0:00:00.550) 0:07:57.665 **** 2025-09-04 00:56:12.472088 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:56:12.472094 | orchestrator | 2025-09-04 00:56:12.472099 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-09-04 00:56:12.472103 | orchestrator | Thursday 04 September 2025 00:52:54 +0000 (0:00:00.512) 0:07:58.177 **** 2025-09-04 00:56:12.472108 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.472113 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.472118 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.472122 | orchestrator | 2025-09-04 00:56:12.472127 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-09-04 00:56:12.472132 | orchestrator | Thursday 04 September 2025 00:52:55 +0000 (0:00:00.713) 0:07:58.891 **** 2025-09-04 00:56:12.472137 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.472142 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.472146 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.472151 | orchestrator | 2025-09-04 00:56:12.472156 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-09-04 00:56:12.472161 | orchestrator | Thursday 04 September 2025 00:52:58 +0000 (0:00:02.805) 0:08:01.696 **** 2025-09-04 00:56:12.472168 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:56:12.472173 | orchestrator | 2025-09-04 00:56:12.472177 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2025-09-04 00:56:12.472182 | orchestrator | Thursday 04 September 2025 00:52:58 +0000 (0:00:00.543) 0:08:02.240 **** 2025-09-04 00:56:12.472187 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:56:12.472192 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:56:12.472196 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:56:12.472201 | orchestrator | 2025-09-04 00:56:12.472206 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-09-04 00:56:12.472211 | orchestrator | Thursday 04 September 2025 00:52:59 +0000 (0:00:01.140) 0:08:03.381 **** 2025-09-04 00:56:12.472215 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:56:12.472220 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:56:12.472225 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:56:12.472229 | orchestrator | 2025-09-04 00:56:12.472234 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-09-04 00:56:12.472239 | orchestrator | Thursday 04 September 2025 00:53:01 +0000 (0:00:01.521) 0:08:04.903 **** 2025-09-04 00:56:12.472244 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:56:12.472248 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:56:12.472253 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:56:12.472258 | 
orchestrator | 2025-09-04 00:56:12.472262 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-09-04 00:56:12.472267 | orchestrator | Thursday 04 September 2025 00:53:03 +0000 (0:00:02.616) 0:08:07.519 **** 2025-09-04 00:56:12.472272 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.472277 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.472281 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.472286 | orchestrator | 2025-09-04 00:56:12.472291 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2025-09-04 00:56:12.472295 | orchestrator | Thursday 04 September 2025 00:53:04 +0000 (0:00:00.367) 0:08:07.887 **** 2025-09-04 00:56:12.472303 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.472308 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.472312 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.472317 | orchestrator | 2025-09-04 00:56:12.472322 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-09-04 00:56:12.472327 | orchestrator | Thursday 04 September 2025 00:53:04 +0000 (0:00:00.329) 0:08:08.217 **** 2025-09-04 00:56:12.472331 | orchestrator | ok: [testbed-node-3] => (item=3) 2025-09-04 00:56:12.472336 | orchestrator | ok: [testbed-node-4] => (item=5) 2025-09-04 00:56:12.472341 | orchestrator | ok: [testbed-node-3] => (item=1) 2025-09-04 00:56:12.472346 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-04 00:56:12.472350 | orchestrator | ok: [testbed-node-5] => (item=4) 2025-09-04 00:56:12.472355 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-09-04 00:56:12.472360 | orchestrator | 2025-09-04 00:56:12.472364 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-09-04 00:56:12.472369 | orchestrator | Thursday 04 September 2025 00:53:06 +0000 (0:00:01.368) 0:08:09.585 **** 2025-09-04 00:56:12.472374 | orchestrator | changed: [testbed-node-3] => (item=3) 2025-09-04 00:56:12.472379 | orchestrator | changed: [testbed-node-4] => (item=5) 2025-09-04 00:56:12.472383 | orchestrator | changed: [testbed-node-5] => (item=4) 2025-09-04 00:56:12.472388 | orchestrator | changed: [testbed-node-4] => (item=0) 2025-09-04 00:56:12.472393 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-09-04 00:56:12.472400 | orchestrator | changed: [testbed-node-3] => (item=1) 2025-09-04 00:56:12.472406 | orchestrator | 2025-09-04 00:56:12.472410 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2025-09-04 00:56:12.472415 | orchestrator | Thursday 04 September 2025 00:53:08 +0000 (0:00:02.147) 0:08:11.732 **** 2025-09-04 00:56:12.472420 | orchestrator | changed: [testbed-node-3] => (item=3) 2025-09-04 00:56:12.472424 | orchestrator | changed: [testbed-node-4] => (item=5) 2025-09-04 00:56:12.472429 | orchestrator | changed: [testbed-node-5] => (item=4) 2025-09-04 00:56:12.472434 | orchestrator | changed: [testbed-node-3] => (item=1) 2025-09-04 00:56:12.472439 | orchestrator | changed: [testbed-node-4] => (item=0) 2025-09-04 00:56:12.472443 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-09-04 00:56:12.472448 | orchestrator | 2025-09-04 00:56:12.472453 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-09-04 00:56:12.472457 | orchestrator | Thursday 04 September 2025 00:53:11 +0000 (0:00:03.578) 0:08:15.311 **** 2025-09-04 
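The noup flag set before activation keeps the new OSDs from being marked up (and from triggering recovery/rebalancing) until all of them have been started; the per-ID units generated above are then started, and the flag is dropped again in the "Unset noup flag" task. In plain commands (OSD IDs as collected in the log):

    ceph osd set noup                                              # on a mon, before activation
    systemctl enable --now ceph-osd@3.service ceph-osd@1.service   # e.g. on testbed-node-3
    ceph osd unset noup                                            # once every OSD daemon is running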
00:56:12.472462 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.472467 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.472472 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-04 00:56:12.472476 | orchestrator | 2025-09-04 00:56:12.472481 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-09-04 00:56:12.472486 | orchestrator | Thursday 04 September 2025 00:53:14 +0000 (0:00:03.053) 0:08:18.365 **** 2025-09-04 00:56:12.472490 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.472495 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.472500 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2025-09-04 00:56:12.472505 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-04 00:56:12.472509 | orchestrator | 2025-09-04 00:56:12.472514 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-09-04 00:56:12.472519 | orchestrator | Thursday 04 September 2025 00:53:27 +0000 (0:00:12.875) 0:08:31.240 **** 2025-09-04 00:56:12.472523 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.472528 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.472532 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.472537 | orchestrator | 2025-09-04 00:56:12.472541 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-04 00:56:12.472546 | orchestrator | Thursday 04 September 2025 00:53:28 +0000 (0:00:00.830) 0:08:32.071 **** 2025-09-04 00:56:12.472553 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.472557 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.472562 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.472566 | orchestrator | 2025-09-04 00:56:12.472571 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-09-04 00:56:12.472577 | orchestrator | Thursday 04 September 2025 00:53:29 +0000 (0:00:00.608) 0:08:32.679 **** 2025-09-04 00:56:12.472582 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:56:12.472586 | orchestrator | 2025-09-04 00:56:12.472591 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-09-04 00:56:12.472595 | orchestrator | Thursday 04 September 2025 00:53:29 +0000 (0:00:00.511) 0:08:33.190 **** 2025-09-04 00:56:12.472600 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-04 00:56:12.472604 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-04 00:56:12.472609 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-04 00:56:12.472613 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.472618 | orchestrator | 2025-09-04 00:56:12.472622 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-09-04 00:56:12.472627 | orchestrator | Thursday 04 September 2025 00:53:30 +0000 (0:00:00.392) 0:08:33.582 **** 2025-09-04 00:56:12.472631 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.472636 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.472640 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.472644 | orchestrator | 2025-09-04 
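The "Wait for all osd to be up" loop needed a single retry before all six OSDs reported up. A comparable manual check (the exact query ceph-ansible runs may differ; jq assumed available):

    ceph osd stat -f json | jq '.num_osds == .num_up_osds and .num_osds == .num_in_osds'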
00:56:12.472649 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-09-04 00:56:12.472653 | orchestrator | Thursday 04 September 2025 00:53:30 +0000 (0:00:00.305) 0:08:33.888 **** 2025-09-04 00:56:12.472658 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.472662 | orchestrator | 2025-09-04 00:56:12.472667 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-09-04 00:56:12.472671 | orchestrator | Thursday 04 September 2025 00:53:31 +0000 (0:00:00.729) 0:08:34.617 **** 2025-09-04 00:56:12.472675 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.472680 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.472684 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.472689 | orchestrator | 2025-09-04 00:56:12.472693 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-09-04 00:56:12.472698 | orchestrator | Thursday 04 September 2025 00:53:31 +0000 (0:00:00.307) 0:08:34.925 **** 2025-09-04 00:56:12.472702 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.472707 | orchestrator | 2025-09-04 00:56:12.472711 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-09-04 00:56:12.472715 | orchestrator | Thursday 04 September 2025 00:53:31 +0000 (0:00:00.243) 0:08:35.168 **** 2025-09-04 00:56:12.472720 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.472724 | orchestrator | 2025-09-04 00:56:12.472729 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-09-04 00:56:12.472733 | orchestrator | Thursday 04 September 2025 00:53:31 +0000 (0:00:00.228) 0:08:35.397 **** 2025-09-04 00:56:12.472738 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.472742 | orchestrator | 2025-09-04 00:56:12.472747 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-09-04 00:56:12.472751 | orchestrator | Thursday 04 September 2025 00:53:31 +0000 (0:00:00.130) 0:08:35.527 **** 2025-09-04 00:56:12.472756 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.472760 | orchestrator | 2025-09-04 00:56:12.472767 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-09-04 00:56:12.472772 | orchestrator | Thursday 04 September 2025 00:53:32 +0000 (0:00:00.231) 0:08:35.759 **** 2025-09-04 00:56:12.472776 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.472781 | orchestrator | 2025-09-04 00:56:12.472785 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-09-04 00:56:12.472813 | orchestrator | Thursday 04 September 2025 00:53:32 +0000 (0:00:00.215) 0:08:35.975 **** 2025-09-04 00:56:12.472818 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-04 00:56:12.472822 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-04 00:56:12.472827 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-04 00:56:12.472831 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.472836 | orchestrator | 2025-09-04 00:56:12.472840 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-09-04 00:56:12.472845 | orchestrator | Thursday 04 September 2025 00:53:32 +0000 (0:00:00.372) 0:08:36.348 **** 2025-09-04 00:56:12.472849 | 
orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.472853 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.472858 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.472862 | orchestrator | 2025-09-04 00:56:12.472867 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-09-04 00:56:12.472871 | orchestrator | Thursday 04 September 2025 00:53:33 +0000 (0:00:00.564) 0:08:36.912 **** 2025-09-04 00:56:12.472876 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.472880 | orchestrator | 2025-09-04 00:56:12.472884 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-09-04 00:56:12.472889 | orchestrator | Thursday 04 September 2025 00:53:33 +0000 (0:00:00.237) 0:08:37.150 **** 2025-09-04 00:56:12.472893 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.472898 | orchestrator | 2025-09-04 00:56:12.472902 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-09-04 00:56:12.472907 | orchestrator | 2025-09-04 00:56:12.472911 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-04 00:56:12.472915 | orchestrator | Thursday 04 September 2025 00:53:34 +0000 (0:00:00.728) 0:08:37.878 **** 2025-09-04 00:56:12.472920 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-1, testbed-node-0, testbed-node-2 2025-09-04 00:56:12.472925 | orchestrator | 2025-09-04 00:56:12.472929 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-04 00:56:12.472934 | orchestrator | Thursday 04 September 2025 00:53:35 +0000 (0:00:01.303) 0:08:39.182 **** 2025-09-04 00:56:12.472943 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:56:12.472947 | orchestrator | 2025-09-04 00:56:12.472952 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-04 00:56:12.472956 | orchestrator | Thursday 04 September 2025 00:53:36 +0000 (0:00:01.232) 0:08:40.414 **** 2025-09-04 00:56:12.472961 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.472965 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.472970 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.472974 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.472979 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.472983 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.472988 | orchestrator | 2025-09-04 00:56:12.472992 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-04 00:56:12.472996 | orchestrator | Thursday 04 September 2025 00:53:38 +0000 (0:00:01.282) 0:08:41.697 **** 2025-09-04 00:56:12.473001 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.473005 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.473010 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.473014 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.473019 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.473023 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.473028 | orchestrator | 2025-09-04 00:56:12.473032 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2025-09-04 00:56:12.473039 | orchestrator | Thursday 04 September 2025 00:53:38 +0000 (0:00:00.693) 0:08:42.391 **** 2025-09-04 00:56:12.473044 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.473048 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.473053 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.473057 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.473061 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.473073 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.473078 | orchestrator | 2025-09-04 00:56:12.473083 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-04 00:56:12.473087 | orchestrator | Thursday 04 September 2025 00:53:39 +0000 (0:00:00.959) 0:08:43.350 **** 2025-09-04 00:56:12.473092 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.473096 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.473100 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.473105 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.473109 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.473114 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.473118 | orchestrator | 2025-09-04 00:56:12.473123 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-04 00:56:12.473127 | orchestrator | Thursday 04 September 2025 00:53:40 +0000 (0:00:00.762) 0:08:44.113 **** 2025-09-04 00:56:12.473131 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.473136 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.473140 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.473145 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.473149 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.473154 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.473158 | orchestrator | 2025-09-04 00:56:12.473163 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-04 00:56:12.473167 | orchestrator | Thursday 04 September 2025 00:53:41 +0000 (0:00:01.256) 0:08:45.369 **** 2025-09-04 00:56:12.473172 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.473176 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.473183 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.473188 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.473192 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.473197 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.473201 | orchestrator | 2025-09-04 00:56:12.473206 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-04 00:56:12.473210 | orchestrator | Thursday 04 September 2025 00:53:42 +0000 (0:00:00.598) 0:08:45.967 **** 2025-09-04 00:56:12.473215 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.473219 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.473224 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.473228 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.473232 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.473237 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.473241 | orchestrator | 2025-09-04 00:56:12.473246 | orchestrator | TASK [ceph-handler : Check for a ceph-crash 
container] ************************* 2025-09-04 00:56:12.473250 | orchestrator | Thursday 04 September 2025 00:53:42 +0000 (0:00:00.543) 0:08:46.511 **** 2025-09-04 00:56:12.473255 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.473259 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.473264 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.473268 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.473272 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.473277 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.473281 | orchestrator | 2025-09-04 00:56:12.473286 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-04 00:56:12.473290 | orchestrator | Thursday 04 September 2025 00:53:44 +0000 (0:00:01.298) 0:08:47.810 **** 2025-09-04 00:56:12.473295 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.473299 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.473304 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.473312 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.473316 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.473320 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.473325 | orchestrator | 2025-09-04 00:56:12.473329 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-04 00:56:12.473334 | orchestrator | Thursday 04 September 2025 00:53:45 +0000 (0:00:00.975) 0:08:48.786 **** 2025-09-04 00:56:12.473338 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.473342 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.473347 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.473351 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.473356 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.473360 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.473365 | orchestrator | 2025-09-04 00:56:12.473369 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-04 00:56:12.473374 | orchestrator | Thursday 04 September 2025 00:53:46 +0000 (0:00:00.821) 0:08:49.607 **** 2025-09-04 00:56:12.473378 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.473383 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.473389 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.473394 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.473398 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.473403 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.473407 | orchestrator | 2025-09-04 00:56:12.473411 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-04 00:56:12.473416 | orchestrator | Thursday 04 September 2025 00:53:46 +0000 (0:00:00.577) 0:08:50.184 **** 2025-09-04 00:56:12.473420 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.473425 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.473429 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.473434 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.473438 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.473442 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.473447 | orchestrator | 2025-09-04 00:56:12.473451 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-04 00:56:12.473456 | orchestrator | Thursday 04 September 
2025 00:53:47 +0000 (0:00:00.906) 0:08:51.091 **** 2025-09-04 00:56:12.473460 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.473464 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.473469 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.473473 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.473478 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.473482 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.473487 | orchestrator | 2025-09-04 00:56:12.473491 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-04 00:56:12.473496 | orchestrator | Thursday 04 September 2025 00:53:48 +0000 (0:00:00.624) 0:08:51.715 **** 2025-09-04 00:56:12.473500 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.473504 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.473509 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.473513 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.473518 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.473522 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.473527 | orchestrator | 2025-09-04 00:56:12.473531 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-04 00:56:12.473536 | orchestrator | Thursday 04 September 2025 00:53:48 +0000 (0:00:00.831) 0:08:52.547 **** 2025-09-04 00:56:12.473540 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.473545 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.473549 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.473553 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.473558 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.473562 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.473567 | orchestrator | 2025-09-04 00:56:12.473571 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-04 00:56:12.473578 | orchestrator | Thursday 04 September 2025 00:53:49 +0000 (0:00:00.594) 0:08:53.141 **** 2025-09-04 00:56:12.473583 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.473587 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.473592 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.473596 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:56:12.473600 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:56:12.473605 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:56:12.473609 | orchestrator | 2025-09-04 00:56:12.473614 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-04 00:56:12.473618 | orchestrator | Thursday 04 September 2025 00:53:50 +0000 (0:00:00.819) 0:08:53.961 **** 2025-09-04 00:56:12.473623 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.473630 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.473634 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.473639 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.473643 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.473648 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.473652 | orchestrator | 2025-09-04 00:56:12.473657 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-04 00:56:12.473661 | orchestrator | Thursday 04 September 2025 00:53:50 +0000 (0:00:00.584) 0:08:54.545 
**** 2025-09-04 00:56:12.473666 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.473670 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.473674 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.473679 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.473683 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.473688 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.473692 | orchestrator | 2025-09-04 00:56:12.473697 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-04 00:56:12.473701 | orchestrator | Thursday 04 September 2025 00:53:51 +0000 (0:00:00.859) 0:08:55.405 **** 2025-09-04 00:56:12.473706 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.473710 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.473714 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.473719 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.473723 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.473728 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.473732 | orchestrator | 2025-09-04 00:56:12.473736 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-09-04 00:56:12.473741 | orchestrator | Thursday 04 September 2025 00:53:53 +0000 (0:00:01.262) 0:08:56.668 **** 2025-09-04 00:56:12.473745 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-04 00:56:12.473750 | orchestrator | 2025-09-04 00:56:12.473754 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-09-04 00:56:12.473759 | orchestrator | Thursday 04 September 2025 00:53:57 +0000 (0:00:03.910) 0:09:00.578 **** 2025-09-04 00:56:12.473763 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-04 00:56:12.473768 | orchestrator | 2025-09-04 00:56:12.473772 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-09-04 00:56:12.473777 | orchestrator | Thursday 04 September 2025 00:53:59 +0000 (0:00:01.985) 0:09:02.564 **** 2025-09-04 00:56:12.473781 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:56:12.473786 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:56:12.473790 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:56:12.473794 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.473799 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:56:12.473803 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:56:12.473808 | orchestrator | 2025-09-04 00:56:12.473812 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-09-04 00:56:12.473819 | orchestrator | Thursday 04 September 2025 00:54:00 +0000 (0:00:01.591) 0:09:04.156 **** 2025-09-04 00:56:12.473823 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:56:12.473832 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:56:12.473836 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:56:12.473841 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:56:12.473845 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:56:12.473849 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:56:12.473854 | orchestrator | 2025-09-04 00:56:12.473858 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2025-09-04 00:56:12.473863 | orchestrator | Thursday 04 September 2025 00:54:01 +0000 (0:00:01.232) 0:09:05.388 **** 2025-09-04 
00:56:12.473867 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:56:12.473872 | orchestrator | 2025-09-04 00:56:12.473876 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-09-04 00:56:12.473881 | orchestrator | Thursday 04 September 2025 00:54:03 +0000 (0:00:01.217) 0:09:06.606 **** 2025-09-04 00:56:12.473885 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:56:12.473890 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:56:12.473894 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:56:12.473898 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:56:12.473903 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:56:12.473907 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:56:12.473912 | orchestrator | 2025-09-04 00:56:12.473916 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-09-04 00:56:12.473921 | orchestrator | Thursday 04 September 2025 00:54:04 +0000 (0:00:01.619) 0:09:08.225 **** 2025-09-04 00:56:12.473925 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:56:12.473930 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:56:12.473934 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:56:12.473938 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:56:12.473943 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:56:12.473947 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:56:12.473952 | orchestrator | 2025-09-04 00:56:12.473956 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-09-04 00:56:12.473960 | orchestrator | Thursday 04 September 2025 00:54:08 +0000 (0:00:03.589) 0:09:11.815 **** 2025-09-04 00:56:12.473965 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:56:12.473970 | orchestrator | 2025-09-04 00:56:12.473974 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-09-04 00:56:12.473979 | orchestrator | Thursday 04 September 2025 00:54:09 +0000 (0:00:01.255) 0:09:13.070 **** 2025-09-04 00:56:12.473983 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.473987 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.473992 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.473996 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.474001 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.474005 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.474009 | orchestrator | 2025-09-04 00:56:12.474036 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-09-04 00:56:12.474042 | orchestrator | Thursday 04 September 2025 00:54:10 +0000 (0:00:00.645) 0:09:13.716 **** 2025-09-04 00:56:12.474046 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:56:12.474054 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:56:12.474059 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:56:12.474063 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:56:12.474075 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:56:12.474080 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:56:12.474084 | orchestrator | 2025-09-04 00:56:12.474089 | 
orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-09-04 00:56:12.474093 | orchestrator | Thursday 04 September 2025 00:54:12 +0000 (0:00:02.817) 0:09:16.533 **** 2025-09-04 00:56:12.474098 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.474106 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.474111 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.474115 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:56:12.474120 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:56:12.474124 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:56:12.474128 | orchestrator | 2025-09-04 00:56:12.474133 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-09-04 00:56:12.474138 | orchestrator | 2025-09-04 00:56:12.474142 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-04 00:56:12.474147 | orchestrator | Thursday 04 September 2025 00:54:13 +0000 (0:00:00.818) 0:09:17.352 **** 2025-09-04 00:56:12.474151 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:56:12.474156 | orchestrator | 2025-09-04 00:56:12.474160 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-04 00:56:12.474165 | orchestrator | Thursday 04 September 2025 00:54:14 +0000 (0:00:00.809) 0:09:18.162 **** 2025-09-04 00:56:12.474169 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:56:12.474174 | orchestrator | 2025-09-04 00:56:12.474179 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-04 00:56:12.474183 | orchestrator | Thursday 04 September 2025 00:54:15 +0000 (0:00:00.491) 0:09:18.653 **** 2025-09-04 00:56:12.474188 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.474192 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.474197 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.474201 | orchestrator | 2025-09-04 00:56:12.474206 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-04 00:56:12.474210 | orchestrator | Thursday 04 September 2025 00:54:15 +0000 (0:00:00.550) 0:09:19.204 **** 2025-09-04 00:56:12.474215 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.474219 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.474224 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.474228 | orchestrator | 2025-09-04 00:56:12.474233 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-04 00:56:12.474239 | orchestrator | Thursday 04 September 2025 00:54:16 +0000 (0:00:00.781) 0:09:19.986 **** 2025-09-04 00:56:12.474244 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.474248 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.474253 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.474257 | orchestrator | 2025-09-04 00:56:12.474262 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-04 00:56:12.474267 | orchestrator | Thursday 04 September 2025 00:54:17 +0000 (0:00:00.752) 0:09:20.738 **** 2025-09-04 00:56:12.474271 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.474275 | orchestrator | ok: [testbed-node-4] 
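The "Check for a <daemon> container" tasks in this play only probe whether the corresponding containerized Ceph daemon is already running on each node; their results feed the handler_*_status facts set a few tasks later. A minimal sketch of such a probe, assuming podman as the container runtime, the usual ceph-<daemon>-<hostname> container naming, and an illustrative register name (the actual task in ceph-ansible's check_running_containers.yml may differ in detail):

    - name: Check for a rgw container
      ansible.builtin.command: "podman ps -q --filter name=ceph-rgw-{{ ansible_facts['hostname'] }}"
      register: ceph_rgw_container_stat  # empty stdout means no such container is running on this node
      changed_when: false
      failed_when: false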
2025-09-04 00:56:12.474280 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.474284 | orchestrator | 2025-09-04 00:56:12.474289 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-04 00:56:12.474293 | orchestrator | Thursday 04 September 2025 00:54:17 +0000 (0:00:00.783) 0:09:21.522 **** 2025-09-04 00:56:12.474298 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.474303 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.474307 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.474311 | orchestrator | 2025-09-04 00:56:12.474316 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-04 00:56:12.474321 | orchestrator | Thursday 04 September 2025 00:54:18 +0000 (0:00:00.675) 0:09:22.197 **** 2025-09-04 00:56:12.474325 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.474330 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.474334 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.474339 | orchestrator | 2025-09-04 00:56:12.474343 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-04 00:56:12.474348 | orchestrator | Thursday 04 September 2025 00:54:18 +0000 (0:00:00.312) 0:09:22.509 **** 2025-09-04 00:56:12.474355 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.474359 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.474364 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.474368 | orchestrator | 2025-09-04 00:56:12.474373 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-04 00:56:12.474377 | orchestrator | Thursday 04 September 2025 00:54:19 +0000 (0:00:00.318) 0:09:22.828 **** 2025-09-04 00:56:12.474382 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.474386 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.474391 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.474395 | orchestrator | 2025-09-04 00:56:12.474400 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-04 00:56:12.474404 | orchestrator | Thursday 04 September 2025 00:54:19 +0000 (0:00:00.684) 0:09:23.512 **** 2025-09-04 00:56:12.474409 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.474413 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.474418 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.474422 | orchestrator | 2025-09-04 00:56:12.474426 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-04 00:56:12.474431 | orchestrator | Thursday 04 September 2025 00:54:21 +0000 (0:00:01.048) 0:09:24.561 **** 2025-09-04 00:56:12.474436 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.474440 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.474445 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.474449 | orchestrator | 2025-09-04 00:56:12.474453 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-04 00:56:12.474458 | orchestrator | Thursday 04 September 2025 00:54:21 +0000 (0:00:00.320) 0:09:24.881 **** 2025-09-04 00:56:12.474462 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.474469 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.474474 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.474478 | 
orchestrator | 2025-09-04 00:56:12.474483 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-04 00:56:12.474487 | orchestrator | Thursday 04 September 2025 00:54:21 +0000 (0:00:00.302) 0:09:25.184 **** 2025-09-04 00:56:12.474492 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.474496 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.474501 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.474505 | orchestrator | 2025-09-04 00:56:12.474510 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-04 00:56:12.474514 | orchestrator | Thursday 04 September 2025 00:54:21 +0000 (0:00:00.352) 0:09:25.536 **** 2025-09-04 00:56:12.474519 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.474523 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.474528 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.474532 | orchestrator | 2025-09-04 00:56:12.474537 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-04 00:56:12.474541 | orchestrator | Thursday 04 September 2025 00:54:22 +0000 (0:00:00.717) 0:09:26.254 **** 2025-09-04 00:56:12.474545 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.474550 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.474554 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.474559 | orchestrator | 2025-09-04 00:56:12.474563 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-04 00:56:12.474568 | orchestrator | Thursday 04 September 2025 00:54:23 +0000 (0:00:00.333) 0:09:26.587 **** 2025-09-04 00:56:12.474572 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.474577 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.474581 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.474586 | orchestrator | 2025-09-04 00:56:12.474590 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-04 00:56:12.474595 | orchestrator | Thursday 04 September 2025 00:54:23 +0000 (0:00:00.340) 0:09:26.928 **** 2025-09-04 00:56:12.474599 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.474607 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.474611 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.474616 | orchestrator | 2025-09-04 00:56:12.474620 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-04 00:56:12.474625 | orchestrator | Thursday 04 September 2025 00:54:23 +0000 (0:00:00.324) 0:09:27.252 **** 2025-09-04 00:56:12.474629 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.474633 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.474638 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.474642 | orchestrator | 2025-09-04 00:56:12.474647 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-04 00:56:12.474651 | orchestrator | Thursday 04 September 2025 00:54:24 +0000 (0:00:00.574) 0:09:27.826 **** 2025-09-04 00:56:12.474658 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.474663 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.474667 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.474672 | orchestrator | 2025-09-04 00:56:12.474676 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] 
************************* 2025-09-04 00:56:12.474681 | orchestrator | Thursday 04 September 2025 00:54:24 +0000 (0:00:00.314) 0:09:28.141 **** 2025-09-04 00:56:12.474685 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.474689 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.474694 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.474698 | orchestrator | 2025-09-04 00:56:12.474703 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-09-04 00:56:12.474707 | orchestrator | Thursday 04 September 2025 00:54:25 +0000 (0:00:00.525) 0:09:28.667 **** 2025-09-04 00:56:12.474712 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.474716 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.474721 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-09-04 00:56:12.474725 | orchestrator | 2025-09-04 00:56:12.474730 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-09-04 00:56:12.474734 | orchestrator | Thursday 04 September 2025 00:54:25 +0000 (0:00:00.723) 0:09:29.390 **** 2025-09-04 00:56:12.474739 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-04 00:56:12.474743 | orchestrator | 2025-09-04 00:56:12.474748 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-09-04 00:56:12.474752 | orchestrator | Thursday 04 September 2025 00:54:28 +0000 (0:00:02.206) 0:09:31.596 **** 2025-09-04 00:56:12.474757 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-09-04 00:56:12.474763 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.474767 | orchestrator | 2025-09-04 00:56:12.474772 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-09-04 00:56:12.474776 | orchestrator | Thursday 04 September 2025 00:54:28 +0000 (0:00:00.206) 0:09:31.802 **** 2025-09-04 00:56:12.474781 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-04 00:56:12.474789 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-04 00:56:12.474794 | orchestrator | 2025-09-04 00:56:12.474798 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-09-04 00:56:12.474803 | orchestrator | Thursday 04 September 2025 00:54:36 +0000 (0:00:07.884) 0:09:39.687 **** 2025-09-04 00:56:12.474810 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-04 00:56:12.474817 | orchestrator | 2025-09-04 00:56:12.474822 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-09-04 00:56:12.474826 | orchestrator | Thursday 04 September 2025 00:54:40 +0000 (0:00:03.975) 0:09:43.663 **** 2025-09-04 00:56:12.474831 | orchestrator | included: 
/ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:56:12.474835 | orchestrator | 2025-09-04 00:56:12.474840 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-09-04 00:56:12.474844 | orchestrator | Thursday 04 September 2025 00:54:40 +0000 (0:00:00.820) 0:09:44.483 **** 2025-09-04 00:56:12.474849 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-04 00:56:12.474853 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-04 00:56:12.474858 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-04 00:56:12.474862 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-09-04 00:56:12.474866 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-09-04 00:56:12.474871 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-09-04 00:56:12.474875 | orchestrator | 2025-09-04 00:56:12.474880 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-09-04 00:56:12.474884 | orchestrator | Thursday 04 September 2025 00:54:42 +0000 (0:00:01.090) 0:09:45.573 **** 2025-09-04 00:56:12.474889 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-04 00:56:12.474893 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-04 00:56:12.474898 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-04 00:56:12.474902 | orchestrator | 2025-09-04 00:56:12.474907 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-09-04 00:56:12.474911 | orchestrator | Thursday 04 September 2025 00:54:44 +0000 (0:00:02.167) 0:09:47.741 **** 2025-09-04 00:56:12.474916 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-04 00:56:12.474920 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-04 00:56:12.474925 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:56:12.474929 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-04 00:56:12.474934 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-04 00:56:12.474938 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:56:12.474942 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-04 00:56:12.474949 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-04 00:56:12.474954 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:56:12.474958 | orchestrator | 2025-09-04 00:56:12.474963 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-09-04 00:56:12.474967 | orchestrator | Thursday 04 September 2025 00:54:45 +0000 (0:00:01.336) 0:09:49.077 **** 2025-09-04 00:56:12.474972 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:56:12.474976 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:56:12.474981 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:56:12.474985 | orchestrator | 2025-09-04 00:56:12.474990 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-09-04 00:56:12.474994 | orchestrator | Thursday 04 September 2025 00:54:48 +0000 (0:00:02.832) 0:09:51.910 **** 2025-09-04 00:56:12.474999 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.475003 | orchestrator | skipping: 
[testbed-node-4] 2025-09-04 00:56:12.475008 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.475012 | orchestrator | 2025-09-04 00:56:12.475017 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-09-04 00:56:12.475021 | orchestrator | Thursday 04 September 2025 00:54:48 +0000 (0:00:00.346) 0:09:52.257 **** 2025-09-04 00:56:12.475026 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:56:12.475033 | orchestrator | 2025-09-04 00:56:12.475037 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-09-04 00:56:12.475042 | orchestrator | Thursday 04 September 2025 00:54:49 +0000 (0:00:00.537) 0:09:52.794 **** 2025-09-04 00:56:12.475046 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:56:12.475051 | orchestrator | 2025-09-04 00:56:12.475055 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-09-04 00:56:12.475060 | orchestrator | Thursday 04 September 2025 00:54:50 +0000 (0:00:00.761) 0:09:53.555 **** 2025-09-04 00:56:12.475064 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:56:12.475075 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:56:12.475080 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:56:12.475085 | orchestrator | 2025-09-04 00:56:12.475089 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-09-04 00:56:12.475093 | orchestrator | Thursday 04 September 2025 00:54:51 +0000 (0:00:01.221) 0:09:54.777 **** 2025-09-04 00:56:12.475098 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:56:12.475102 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:56:12.475107 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:56:12.475112 | orchestrator | 2025-09-04 00:56:12.475116 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-09-04 00:56:12.475121 | orchestrator | Thursday 04 September 2025 00:54:52 +0000 (0:00:01.153) 0:09:55.930 **** 2025-09-04 00:56:12.475125 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:56:12.475130 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:56:12.475134 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:56:12.475139 | orchestrator | 2025-09-04 00:56:12.475143 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2025-09-04 00:56:12.475148 | orchestrator | Thursday 04 September 2025 00:54:54 +0000 (0:00:01.877) 0:09:57.807 **** 2025-09-04 00:56:12.475152 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:56:12.475159 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:56:12.475164 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:56:12.475168 | orchestrator | 2025-09-04 00:56:12.475173 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-09-04 00:56:12.475177 | orchestrator | Thursday 04 September 2025 00:54:56 +0000 (0:00:01.867) 0:09:59.675 **** 2025-09-04 00:56:12.475182 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.475186 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.475191 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.475195 | orchestrator | 2025-09-04 00:56:12.475200 | orchestrator | RUNNING HANDLER [ceph-handler : Make 
tempdir for scripts] ********************** 2025-09-04 00:56:12.475204 | orchestrator | Thursday 04 September 2025 00:54:57 +0000 (0:00:01.432) 0:10:01.107 **** 2025-09-04 00:56:12.475209 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:56:12.475213 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:56:12.475218 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:56:12.475222 | orchestrator | 2025-09-04 00:56:12.475226 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-09-04 00:56:12.475231 | orchestrator | Thursday 04 September 2025 00:54:58 +0000 (0:00:00.662) 0:10:01.770 **** 2025-09-04 00:56:12.475235 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:56:12.475240 | orchestrator | 2025-09-04 00:56:12.475244 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-09-04 00:56:12.475249 | orchestrator | Thursday 04 September 2025 00:54:58 +0000 (0:00:00.505) 0:10:02.276 **** 2025-09-04 00:56:12.475253 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.475258 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.475262 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.475266 | orchestrator | 2025-09-04 00:56:12.475271 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-09-04 00:56:12.475275 | orchestrator | Thursday 04 September 2025 00:54:59 +0000 (0:00:00.290) 0:10:02.566 **** 2025-09-04 00:56:12.475284 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:56:12.475289 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:56:12.475293 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:56:12.475297 | orchestrator | 2025-09-04 00:56:12.475302 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-09-04 00:56:12.475306 | orchestrator | Thursday 04 September 2025 00:55:00 +0000 (0:00:01.495) 0:10:04.062 **** 2025-09-04 00:56:12.475311 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-04 00:56:12.475315 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-04 00:56:12.475320 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-04 00:56:12.475324 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.475329 | orchestrator | 2025-09-04 00:56:12.475335 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-09-04 00:56:12.475340 | orchestrator | Thursday 04 September 2025 00:55:01 +0000 (0:00:00.595) 0:10:04.658 **** 2025-09-04 00:56:12.475344 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.475349 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.475353 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.475358 | orchestrator | 2025-09-04 00:56:12.475362 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-09-04 00:56:12.475367 | orchestrator | 2025-09-04 00:56:12.475371 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-04 00:56:12.475376 | orchestrator | Thursday 04 September 2025 00:55:01 +0000 (0:00:00.554) 0:10:05.212 **** 2025-09-04 00:56:12.475380 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 
00:56:12.475385 | orchestrator | 2025-09-04 00:56:12.475389 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-04 00:56:12.475394 | orchestrator | Thursday 04 September 2025 00:55:02 +0000 (0:00:00.722) 0:10:05.935 **** 2025-09-04 00:56:12.475398 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:56:12.475403 | orchestrator | 2025-09-04 00:56:12.475407 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-04 00:56:12.475412 | orchestrator | Thursday 04 September 2025 00:55:02 +0000 (0:00:00.555) 0:10:06.490 **** 2025-09-04 00:56:12.475416 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.475421 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.475425 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.475430 | orchestrator | 2025-09-04 00:56:12.475434 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-04 00:56:12.475438 | orchestrator | Thursday 04 September 2025 00:55:03 +0000 (0:00:00.527) 0:10:07.017 **** 2025-09-04 00:56:12.475443 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.475447 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.475452 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.475456 | orchestrator | 2025-09-04 00:56:12.475461 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-04 00:56:12.475465 | orchestrator | Thursday 04 September 2025 00:55:04 +0000 (0:00:00.717) 0:10:07.735 **** 2025-09-04 00:56:12.475470 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.475474 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.475479 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.475483 | orchestrator | 2025-09-04 00:56:12.475488 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-04 00:56:12.475492 | orchestrator | Thursday 04 September 2025 00:55:04 +0000 (0:00:00.727) 0:10:08.463 **** 2025-09-04 00:56:12.475497 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.475501 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.475506 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.475510 | orchestrator | 2025-09-04 00:56:12.475515 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-04 00:56:12.475522 | orchestrator | Thursday 04 September 2025 00:55:05 +0000 (0:00:00.753) 0:10:09.216 **** 2025-09-04 00:56:12.475527 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.475531 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.475536 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.475540 | orchestrator | 2025-09-04 00:56:12.475547 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-04 00:56:12.475552 | orchestrator | Thursday 04 September 2025 00:55:06 +0000 (0:00:00.596) 0:10:09.813 **** 2025-09-04 00:56:12.475556 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.475561 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.475565 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.475570 | orchestrator | 2025-09-04 00:56:12.475574 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 
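The Set_fact handler_*_status tasks recorded just below reduce each container probe to a boolean that later gates the corresponding restart handler. A hedged sketch, reusing the illustrative ceph_rgw_container_stat register from the note above (the real expressions in ceph-ansible are more involved):

    - name: Set_fact handler_rgw_status
      ansible.builtin.set_fact:
        # true only if the probe ran and listed at least one matching container
        handler_rgw_status: "{{ ceph_rgw_container_stat.rc == 0 and (ceph_rgw_container_stat.stdout_lines | length > 0) }}"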
2025-09-04 00:56:12.475579 | orchestrator | Thursday 04 September 2025 00:55:06 +0000 (0:00:00.323) 0:10:10.136 **** 2025-09-04 00:56:12.475583 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.475588 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.475592 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.475597 | orchestrator | 2025-09-04 00:56:12.475601 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-04 00:56:12.475606 | orchestrator | Thursday 04 September 2025 00:55:06 +0000 (0:00:00.326) 0:10:10.463 **** 2025-09-04 00:56:12.475610 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.475615 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.475619 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.475624 | orchestrator | 2025-09-04 00:56:12.475628 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-04 00:56:12.475633 | orchestrator | Thursday 04 September 2025 00:55:07 +0000 (0:00:00.721) 0:10:11.184 **** 2025-09-04 00:56:12.475637 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.475642 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.475646 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.475651 | orchestrator | 2025-09-04 00:56:12.475655 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-04 00:56:12.475659 | orchestrator | Thursday 04 September 2025 00:55:08 +0000 (0:00:00.975) 0:10:12.160 **** 2025-09-04 00:56:12.475664 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.475669 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.475673 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.475677 | orchestrator | 2025-09-04 00:56:12.475682 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-04 00:56:12.475686 | orchestrator | Thursday 04 September 2025 00:55:08 +0000 (0:00:00.306) 0:10:12.467 **** 2025-09-04 00:56:12.475691 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.475695 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.475700 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.475704 | orchestrator | 2025-09-04 00:56:12.475709 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-04 00:56:12.475713 | orchestrator | Thursday 04 September 2025 00:55:09 +0000 (0:00:00.310) 0:10:12.778 **** 2025-09-04 00:56:12.475718 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.475722 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.475727 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.475731 | orchestrator | 2025-09-04 00:56:12.475738 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-04 00:56:12.475743 | orchestrator | Thursday 04 September 2025 00:55:09 +0000 (0:00:00.321) 0:10:13.099 **** 2025-09-04 00:56:12.475747 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.475752 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.475756 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.475761 | orchestrator | 2025-09-04 00:56:12.475765 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-04 00:56:12.475770 | orchestrator | Thursday 04 September 2025 00:55:10 +0000 (0:00:00.567) 0:10:13.666 **** 2025-09-04 
00:56:12.475777 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.475781 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.475786 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.475790 | orchestrator | 2025-09-04 00:56:12.475794 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-04 00:56:12.475799 | orchestrator | Thursday 04 September 2025 00:55:10 +0000 (0:00:00.341) 0:10:14.008 **** 2025-09-04 00:56:12.475803 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.475808 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.475812 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.475817 | orchestrator | 2025-09-04 00:56:12.475821 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-04 00:56:12.475826 | orchestrator | Thursday 04 September 2025 00:55:10 +0000 (0:00:00.316) 0:10:14.325 **** 2025-09-04 00:56:12.475830 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.475835 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.475839 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.475844 | orchestrator | 2025-09-04 00:56:12.475848 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-04 00:56:12.475853 | orchestrator | Thursday 04 September 2025 00:55:11 +0000 (0:00:00.296) 0:10:14.621 **** 2025-09-04 00:56:12.475857 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.475861 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.475866 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.475870 | orchestrator | 2025-09-04 00:56:12.475875 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-04 00:56:12.475880 | orchestrator | Thursday 04 September 2025 00:55:11 +0000 (0:00:00.557) 0:10:15.178 **** 2025-09-04 00:56:12.475884 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.475889 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.475893 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.475898 | orchestrator | 2025-09-04 00:56:12.475902 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-04 00:56:12.475906 | orchestrator | Thursday 04 September 2025 00:55:11 +0000 (0:00:00.338) 0:10:15.517 **** 2025-09-04 00:56:12.475911 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.475915 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.475920 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.475924 | orchestrator | 2025-09-04 00:56:12.475929 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-09-04 00:56:12.475933 | orchestrator | Thursday 04 September 2025 00:55:12 +0000 (0:00:00.529) 0:10:16.047 **** 2025-09-04 00:56:12.475938 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:56:12.475942 | orchestrator | 2025-09-04 00:56:12.475947 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-09-04 00:56:12.475954 | orchestrator | Thursday 04 September 2025 00:55:13 +0000 (0:00:00.744) 0:10:16.791 **** 2025-09-04 00:56:12.475958 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-04 00:56:12.475963 | orchestrator | skipping: [testbed-node-3] => 
(item=None)  2025-09-04 00:56:12.475967 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-04 00:56:12.475972 | orchestrator | 2025-09-04 00:56:12.475976 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-09-04 00:56:12.475981 | orchestrator | Thursday 04 September 2025 00:55:15 +0000 (0:00:01.995) 0:10:18.787 **** 2025-09-04 00:56:12.475985 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-04 00:56:12.475990 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-04 00:56:12.475994 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:56:12.475999 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-04 00:56:12.476003 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-04 00:56:12.476008 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:56:12.476015 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-04 00:56:12.476019 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-04 00:56:12.476024 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:56:12.476028 | orchestrator | 2025-09-04 00:56:12.476032 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-09-04 00:56:12.476037 | orchestrator | Thursday 04 September 2025 00:55:16 +0000 (0:00:01.125) 0:10:19.912 **** 2025-09-04 00:56:12.476041 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.476046 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.476050 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.476055 | orchestrator | 2025-09-04 00:56:12.476059 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-09-04 00:56:12.476064 | orchestrator | Thursday 04 September 2025 00:55:16 +0000 (0:00:00.307) 0:10:20.220 **** 2025-09-04 00:56:12.476086 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:56:12.476091 | orchestrator | 2025-09-04 00:56:12.476096 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-09-04 00:56:12.476100 | orchestrator | Thursday 04 September 2025 00:55:17 +0000 (0:00:00.765) 0:10:20.986 **** 2025-09-04 00:56:12.476105 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-04 00:56:12.476111 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-04 00:56:12.476116 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-04 00:56:12.476121 | orchestrator | 2025-09-04 00:56:12.476125 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-09-04 00:56:12.476130 | orchestrator | Thursday 04 September 2025 00:55:18 +0000 (0:00:00.757) 0:10:21.743 **** 2025-09-04 00:56:12.476134 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-04 00:56:12.476139 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-04 
00:56:12.476143 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-04 00:56:12.476148 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-04 00:56:12.476152 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-04 00:56:12.476157 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-04 00:56:12.476162 | orchestrator | 2025-09-04 00:56:12.476166 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-09-04 00:56:12.476171 | orchestrator | Thursday 04 September 2025 00:55:22 +0000 (0:00:04.040) 0:10:25.784 **** 2025-09-04 00:56:12.476175 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-04 00:56:12.476180 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-04 00:56:12.476184 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-04 00:56:12.476188 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-04 00:56:12.476192 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-04 00:56:12.476196 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-04 00:56:12.476200 | orchestrator | 2025-09-04 00:56:12.476204 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-09-04 00:56:12.476208 | orchestrator | Thursday 04 September 2025 00:55:24 +0000 (0:00:02.585) 0:10:28.369 **** 2025-09-04 00:56:12.476215 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-04 00:56:12.476220 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:56:12.476224 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-04 00:56:12.476228 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:56:12.476232 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-04 00:56:12.476236 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:56:12.476240 | orchestrator | 2025-09-04 00:56:12.476244 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-09-04 00:56:12.476251 | orchestrator | Thursday 04 September 2025 00:55:25 +0000 (0:00:01.143) 0:10:29.513 **** 2025-09-04 00:56:12.476255 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-09-04 00:56:12.476259 | orchestrator | 2025-09-04 00:56:12.476263 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-09-04 00:56:12.476267 | orchestrator | Thursday 04 September 2025 00:55:26 +0000 (0:00:00.249) 0:10:29.762 **** 2025-09-04 00:56:12.476271 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-04 00:56:12.476275 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-04 00:56:12.476279 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-04 00:56:12.476284 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-04 00:56:12.476288 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-04 00:56:12.476292 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.476296 | orchestrator | 2025-09-04 00:56:12.476300 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-09-04 00:56:12.476304 | orchestrator | Thursday 04 September 2025 00:55:26 +0000 (0:00:00.590) 0:10:30.353 **** 2025-09-04 00:56:12.476308 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-04 00:56:12.476312 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-04 00:56:12.476316 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-04 00:56:12.476320 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-04 00:56:12.476324 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-04 00:56:12.476330 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.476335 | orchestrator | 2025-09-04 00:56:12.476339 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-09-04 00:56:12.476343 | orchestrator | Thursday 04 September 2025 00:55:27 +0000 (0:00:00.797) 0:10:31.150 **** 2025-09-04 00:56:12.476347 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-04 00:56:12.476351 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-04 00:56:12.476355 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-04 00:56:12.476359 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-04 00:56:12.476366 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-04 00:56:12.476370 | orchestrator | 2025-09-04 00:56:12.476374 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-09-04 00:56:12.476378 | orchestrator | Thursday 04 September 2025 00:55:58 +0000 (0:00:30.699) 0:11:01.850 **** 2025-09-04 00:56:12.476382 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.476386 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.476390 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.476394 | orchestrator | 2025-09-04 00:56:12.476398 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-09-04 00:56:12.476402 | orchestrator | 
Thursday 04 September 2025 00:55:58 +0000 (0:00:00.563) 0:11:02.413 **** 2025-09-04 00:56:12.476406 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.476411 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.476415 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.476419 | orchestrator | 2025-09-04 00:56:12.476423 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-09-04 00:56:12.476427 | orchestrator | Thursday 04 September 2025 00:55:59 +0000 (0:00:00.317) 0:11:02.731 **** 2025-09-04 00:56:12.476431 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:56:12.476435 | orchestrator | 2025-09-04 00:56:12.476439 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-09-04 00:56:12.476443 | orchestrator | Thursday 04 September 2025 00:55:59 +0000 (0:00:00.553) 0:11:03.284 **** 2025-09-04 00:56:12.476447 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:56:12.476451 | orchestrator | 2025-09-04 00:56:12.476457 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-09-04 00:56:12.476461 | orchestrator | Thursday 04 September 2025 00:56:00 +0000 (0:00:00.759) 0:11:04.044 **** 2025-09-04 00:56:12.476465 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:56:12.476469 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:56:12.476474 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:56:12.476478 | orchestrator | 2025-09-04 00:56:12.476482 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-09-04 00:56:12.476486 | orchestrator | Thursday 04 September 2025 00:56:01 +0000 (0:00:01.231) 0:11:05.275 **** 2025-09-04 00:56:12.476490 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:56:12.476494 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:56:12.476498 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:56:12.476502 | orchestrator | 2025-09-04 00:56:12.476506 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-09-04 00:56:12.476510 | orchestrator | Thursday 04 September 2025 00:56:02 +0000 (0:00:01.226) 0:11:06.501 **** 2025-09-04 00:56:12.476514 | orchestrator | changed: [testbed-node-4] 2025-09-04 00:56:12.476518 | orchestrator | changed: [testbed-node-3] 2025-09-04 00:56:12.476522 | orchestrator | changed: [testbed-node-5] 2025-09-04 00:56:12.476526 | orchestrator | 2025-09-04 00:56:12.476530 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-09-04 00:56:12.476534 | orchestrator | Thursday 04 September 2025 00:56:04 +0000 (0:00:01.796) 0:11:08.298 **** 2025-09-04 00:56:12.476538 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-04 00:56:12.476542 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-04 00:56:12.476547 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-04 00:56:12.476553 | orchestrator | 2025-09-04 00:56:12.476557 | orchestrator | RUNNING 
HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-04 00:56:12.476561 | orchestrator | Thursday 04 September 2025 00:56:07 +0000 (0:00:02.700) 0:11:10.998 **** 2025-09-04 00:56:12.476565 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.476569 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.476573 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.476577 | orchestrator | 2025-09-04 00:56:12.476581 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-09-04 00:56:12.476585 | orchestrator | Thursday 04 September 2025 00:56:07 +0000 (0:00:00.305) 0:11:11.304 **** 2025-09-04 00:56:12.476592 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:56:12.476596 | orchestrator | 2025-09-04 00:56:12.476600 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-09-04 00:56:12.476604 | orchestrator | Thursday 04 September 2025 00:56:08 +0000 (0:00:00.820) 0:11:12.124 **** 2025-09-04 00:56:12.476608 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.476612 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.476616 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.476620 | orchestrator | 2025-09-04 00:56:12.476624 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-09-04 00:56:12.476628 | orchestrator | Thursday 04 September 2025 00:56:08 +0000 (0:00:00.327) 0:11:12.451 **** 2025-09-04 00:56:12.476632 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.476636 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:56:12.476640 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:56:12.476644 | orchestrator | 2025-09-04 00:56:12.476648 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-09-04 00:56:12.476653 | orchestrator | Thursday 04 September 2025 00:56:09 +0000 (0:00:00.316) 0:11:12.768 **** 2025-09-04 00:56:12.476657 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-04 00:56:12.476661 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-04 00:56:12.476665 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-04 00:56:12.476669 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:56:12.476673 | orchestrator | 2025-09-04 00:56:12.476677 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-09-04 00:56:12.476681 | orchestrator | Thursday 04 September 2025 00:56:10 +0000 (0:00:01.068) 0:11:13.837 **** 2025-09-04 00:56:12.476685 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:56:12.476689 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:56:12.476693 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:56:12.476697 | orchestrator | 2025-09-04 00:56:12.476701 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:56:12.476705 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2025-09-04 00:56:12.476709 | orchestrator | testbed-node-1 : ok=127  changed=32  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-09-04 00:56:12.476713 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-09-04 
00:56:12.476718 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2025-09-04 00:56:12.476722 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-09-04 00:56:12.476728 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-09-04 00:56:12.476732 | orchestrator | 2025-09-04 00:56:12.476739 | orchestrator | 2025-09-04 00:56:12.476743 | orchestrator | 2025-09-04 00:56:12.476747 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 00:56:12.476751 | orchestrator | Thursday 04 September 2025 00:56:10 +0000 (0:00:00.271) 0:11:14.108 **** 2025-09-04 00:56:12.476755 | orchestrator | =============================================================================== 2025-09-04 00:56:12.476759 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 54.77s 2025-09-04 00:56:12.476763 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 41.48s 2025-09-04 00:56:12.476767 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.94s 2025-09-04 00:56:12.476771 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.70s 2025-09-04 00:56:12.476775 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.90s 2025-09-04 00:56:12.476779 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.08s 2025-09-04 00:56:12.476783 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.88s 2025-09-04 00:56:12.476787 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 11.01s 2025-09-04 00:56:12.476791 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.73s 2025-09-04 00:56:12.476795 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 7.88s 2025-09-04 00:56:12.476799 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.69s 2025-09-04 00:56:12.476803 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.54s 2025-09-04 00:56:12.476807 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.67s 2025-09-04 00:56:12.476811 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.48s 2025-09-04 00:56:12.476815 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 4.07s 2025-09-04 00:56:12.476819 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.04s 2025-09-04 00:56:12.476823 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.98s 2025-09-04 00:56:12.476827 | orchestrator | ceph-config : Generate Ceph file ---------------------------------------- 3.96s 2025-09-04 00:56:12.476831 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.91s 2025-09-04 00:56:12.476837 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.83s 2025-09-04 00:56:15.498309 | orchestrator | 2025-09-04 00:56:15 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 
00:56:15.499437 | orchestrator | 2025-09-04 00:56:15 | INFO  | Task 6f36cbd2-ca76-495e-ac06-a32a9520e4a1 is in state STARTED 2025-09-04 00:56:15.500351 | orchestrator | 2025-09-04 00:56:15 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:56:15.500377 | orchestrator | 2025-09-04 00:56:15 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:56:55.155585 | orchestrator | 2025-09-04 00:56:55 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:56:55.157884 | orchestrator | 2025-09-04 00:56:55 | INFO  | Task 6f36cbd2-ca76-495e-ac06-a32a9520e4a1 is in state STARTED 2025-09-04 00:56:55.160559 | orchestrator | 2025-09-04 00:56:55 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:56:55.160677 | orchestrator | 2025-09-04 00:56:55 | INFO  | Wait 1
second(s) until the next check 2025-09-04 00:56:58.204801 | orchestrator | 2025-09-04 00:56:58 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:56:58.206691 | orchestrator | 2025-09-04 00:56:58 | INFO  | Task 6f36cbd2-ca76-495e-ac06-a32a9520e4a1 is in state STARTED 2025-09-04 00:56:58.208265 | orchestrator | 2025-09-04 00:56:58 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:56:58.208288 | orchestrator | 2025-09-04 00:56:58 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:57:01.245476 | orchestrator | 2025-09-04 00:57:01 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:57:01.247148 | orchestrator | 2025-09-04 00:57:01 | INFO  | Task 6f36cbd2-ca76-495e-ac06-a32a9520e4a1 is in state STARTED 2025-09-04 00:57:01.249714 | orchestrator | 2025-09-04 00:57:01 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:57:01.249742 | orchestrator | 2025-09-04 00:57:01 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:57:04.296291 | orchestrator | 2025-09-04 00:57:04 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:57:04.298576 | orchestrator | 2025-09-04 00:57:04 | INFO  | Task 6f36cbd2-ca76-495e-ac06-a32a9520e4a1 is in state STARTED 2025-09-04 00:57:04.299643 | orchestrator | 2025-09-04 00:57:04 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:57:04.299669 | orchestrator | 2025-09-04 00:57:04 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:57:07.346672 | orchestrator | 2025-09-04 00:57:07 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state STARTED 2025-09-04 00:57:07.348999 | orchestrator | 2025-09-04 00:57:07 | INFO  | Task 6f36cbd2-ca76-495e-ac06-a32a9520e4a1 is in state STARTED 2025-09-04 00:57:07.350423 | orchestrator | 2025-09-04 00:57:07 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:57:07.350949 | orchestrator | 2025-09-04 00:57:07 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:57:10.392867 | orchestrator | 2025-09-04 00:57:10 | INFO  | Task ee39899d-e493-4fca-8f85-66dbab289b46 is in state SUCCESS 2025-09-04 00:57:10.395030 | orchestrator | 2025-09-04 00:57:10.395085 | orchestrator | 2025-09-04 00:57:10.395099 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-04 00:57:10.395111 | orchestrator | 2025-09-04 00:57:10.395123 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-04 00:57:10.395135 | orchestrator | Thursday 04 September 2025 00:54:05 +0000 (0:00:00.272) 0:00:00.272 **** 2025-09-04 00:57:10.395146 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:57:10.395159 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:57:10.395170 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:57:10.395181 | orchestrator | 2025-09-04 00:57:10.395192 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-04 00:57:10.395204 | orchestrator | Thursday 04 September 2025 00:54:05 +0000 (0:00:00.297) 0:00:00.569 **** 2025-09-04 00:57:10.395215 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-09-04 00:57:10.395227 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-09-04 00:57:10.395239 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-09-04 00:57:10.395250 | 
orchestrator | 2025-09-04 00:57:10.395280 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-09-04 00:57:10.395292 | orchestrator | 2025-09-04 00:57:10.395302 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-04 00:57:10.395313 | orchestrator | Thursday 04 September 2025 00:54:05 +0000 (0:00:00.429) 0:00:00.999 **** 2025-09-04 00:57:10.395324 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:57:10.395335 | orchestrator | 2025-09-04 00:57:10.395346 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-09-04 00:57:10.395356 | orchestrator | Thursday 04 September 2025 00:54:06 +0000 (0:00:00.515) 0:00:01.515 **** 2025-09-04 00:57:10.395367 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-04 00:57:10.395378 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-04 00:57:10.395389 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-04 00:57:10.395399 | orchestrator | 2025-09-04 00:57:10.395410 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-09-04 00:57:10.395421 | orchestrator | Thursday 04 September 2025 00:54:07 +0000 (0:00:00.699) 0:00:02.214 **** 2025-09-04 00:57:10.395435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-04 00:57:10.395470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-04 00:57:10.395507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-04 00:57:10.395523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-04 00:57:10.395536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-04 00:57:10.395550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-04 00:57:10.395568 | orchestrator | 2025-09-04 00:57:10.395580 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-04 00:57:10.395591 | orchestrator | Thursday 04 September 2025 00:54:08 +0000 (0:00:01.750) 0:00:03.964 **** 2025-09-04 00:57:10.395601 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:57:10.395612 | orchestrator | 2025-09-04 00:57:10.395625 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-09-04 00:57:10.395638 | orchestrator | Thursday 04 September 2025 00:54:09 +0000 (0:00:00.548) 0:00:04.513 **** 2025-09-04 00:57:10.395664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-04 00:57:10.395680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-04 00:57:10.395694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-04 00:57:10.395714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-04 00:57:10.395754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-04 00:57:10.395780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-04 00:57:10.395794 | orchestrator | 2025-09-04 00:57:10.395806 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-09-04 00:57:10.395819 | orchestrator | Thursday 
04 September 2025 00:54:12 +0000 (0:00:02.982) 0:00:07.496 **** 2025-09-04 00:57:10.395832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-04 00:57:10.395939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-04 00:57:10.395954 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:57:10.395973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-04 00:57:10.395997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-04 00:57:10.396011 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:57:10.396023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-04 00:57:10.396041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-04 00:57:10.396084 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:57:10.396096 | orchestrator | 2025-09-04 00:57:10.396106 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-09-04 00:57:10.396117 | orchestrator | Thursday 04 September 2025 00:54:13 +0000 (0:00:00.890) 0:00:08.387 **** 2025-09-04 00:57:10.396133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-04 00:57:10.396153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-04 00:57:10.396165 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:57:10.396176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-04 00:57:10.396194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-04 00:57:10.396206 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:57:10.396217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 
'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-04 00:57:10.396240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-04 00:57:10.396252 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:57:10.396263 | orchestrator | 2025-09-04 00:57:10.396274 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-09-04 00:57:10.396285 | orchestrator | Thursday 04 September 2025 00:54:14 +0000 (0:00:01.324) 0:00:09.711 **** 2025-09-04 00:57:10.396296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-04 00:57:10.396314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-04 00:57:10.396325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-04 00:57:10.396353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-04 00:57:10.396366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-04 00:57:10.396384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-04 00:57:10.396396 | orchestrator | 2025-09-04 00:57:10.396407 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-09-04 00:57:10.396418 | orchestrator | Thursday 04 September 2025 00:54:16 +0000 (0:00:02.345) 0:00:12.056 **** 2025-09-04 00:57:10.396428 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:57:10.396439 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:57:10.396450 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:57:10.396460 | orchestrator | 2025-09-04 00:57:10.396471 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-09-04 00:57:10.396482 | orchestrator | Thursday 04 September 2025 00:54:19 +0000 (0:00:03.103) 0:00:15.160 **** 2025-09-04 00:57:10.396492 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:57:10.396503 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:57:10.396513 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:57:10.396524 | orchestrator | 2025-09-04 00:57:10.396534 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-09-04 00:57:10.396545 | orchestrator | Thursday 04 September 2025 00:54:22 +0000 (0:00:02.068) 0:00:17.228 **** 2025-09-04 00:57:10.396560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-04 00:57:10.396578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-04 00:57:10.396596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-04 00:57:10.396608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-04 00:57:10.396620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-04 00:57:10.396643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-04 00:57:10.396664 | orchestrator | 2025-09-04 00:57:10.396674 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-04 00:57:10.396685 | orchestrator | Thursday 04 September 2025 00:54:24 +0000 (0:00:02.371) 0:00:19.599 **** 2025-09-04 00:57:10.396696 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:57:10.396706 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:57:10.396717 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:57:10.396727 | orchestrator | 2025-09-04 00:57:10.396738 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-04 00:57:10.396748 | orchestrator | Thursday 04 September 2025 00:54:24 +0000 (0:00:00.309) 0:00:19.909 **** 2025-09-04 00:57:10.396759 | orchestrator | 2025-09-04 00:57:10.396769 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-04 00:57:10.396780 | orchestrator | Thursday 04 September 2025 00:54:24 +0000 (0:00:00.065) 0:00:19.974 **** 2025-09-04 00:57:10.396790 | orchestrator | 2025-09-04 00:57:10.396801 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-04 00:57:10.396811 | orchestrator | Thursday 04 September 2025 00:54:24 +0000 (0:00:00.060) 0:00:20.034 **** 2025-09-04 00:57:10.396822 | orchestrator | 2025-09-04 00:57:10.396832 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-09-04 00:57:10.396843 | orchestrator | Thursday 04 September 2025 00:54:24 +0000 (0:00:00.068) 0:00:20.103 **** 2025-09-04 00:57:10.396853 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:57:10.396864 | orchestrator | 2025-09-04 00:57:10.396874 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-09-04 00:57:10.396885 | orchestrator | Thursday 04 September 2025 00:54:25 +0000 (0:00:00.198) 0:00:20.301 **** 2025-09-04 00:57:10.396895 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:57:10.396905 | orchestrator | 2025-09-04 00:57:10.396916 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-09-04 00:57:10.396926 | orchestrator | Thursday 04 September 2025 00:54:25 +0000 (0:00:00.653) 0:00:20.955 **** 2025-09-04 00:57:10.396937 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:57:10.396947 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:57:10.396958 | orchestrator | changed: [testbed-node-2] 
2025-09-04 00:57:10.396968 | orchestrator | 2025-09-04 00:57:10.396979 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-09-04 00:57:10.396990 | orchestrator | Thursday 04 September 2025 00:55:32 +0000 (0:01:06.635) 0:01:27.590 **** 2025-09-04 00:57:10.397000 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:57:10.397011 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:57:10.397021 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:57:10.397032 | orchestrator | 2025-09-04 00:57:10.397042 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-04 00:57:10.397078 | orchestrator | Thursday 04 September 2025 00:56:58 +0000 (0:01:26.357) 0:02:53.948 **** 2025-09-04 00:57:10.397089 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:57:10.397100 | orchestrator | 2025-09-04 00:57:10.397111 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-09-04 00:57:10.397122 | orchestrator | Thursday 04 September 2025 00:56:59 +0000 (0:00:00.520) 0:02:54.468 **** 2025-09-04 00:57:10.397133 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:57:10.397145 | orchestrator | 2025-09-04 00:57:10.397155 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-09-04 00:57:10.397166 | orchestrator | Thursday 04 September 2025 00:57:02 +0000 (0:00:02.780) 0:02:57.249 **** 2025-09-04 00:57:10.397177 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:57:10.397188 | orchestrator | 2025-09-04 00:57:10.397199 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-09-04 00:57:10.397216 | orchestrator | Thursday 04 September 2025 00:57:04 +0000 (0:00:02.156) 0:02:59.405 **** 2025-09-04 00:57:10.397227 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:57:10.397239 | orchestrator | 2025-09-04 00:57:10.397250 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-09-04 00:57:10.397261 | orchestrator | Thursday 04 September 2025 00:57:07 +0000 (0:00:02.794) 0:03:02.200 **** 2025-09-04 00:57:10.397272 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:57:10.397282 | orchestrator | 2025-09-04 00:57:10.397293 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:57:10.397305 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-04 00:57:10.397318 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-04 00:57:10.397333 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-04 00:57:10.397345 | orchestrator | 2025-09-04 00:57:10.397356 | orchestrator | 2025-09-04 00:57:10.397367 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 00:57:10.397385 | orchestrator | Thursday 04 September 2025 00:57:09 +0000 (0:00:02.671) 0:03:04.871 **** 2025-09-04 00:57:10.397397 | orchestrator | =============================================================================== 2025-09-04 00:57:10.397408 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 86.36s 2025-09-04 00:57:10.397419 | orchestrator | opensearch : 
Restart opensearch container ------------------------------ 66.64s 2025-09-04 00:57:10.397430 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.10s 2025-09-04 00:57:10.397440 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.98s 2025-09-04 00:57:10.397451 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.79s 2025-09-04 00:57:10.397462 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.78s 2025-09-04 00:57:10.397473 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.67s 2025-09-04 00:57:10.397484 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.37s 2025-09-04 00:57:10.397495 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.35s 2025-09-04 00:57:10.397506 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.16s 2025-09-04 00:57:10.397517 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.07s 2025-09-04 00:57:10.397528 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.75s 2025-09-04 00:57:10.397538 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.32s 2025-09-04 00:57:10.397549 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.89s 2025-09-04 00:57:10.397560 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.70s 2025-09-04 00:57:10.397571 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.65s 2025-09-04 00:57:10.397582 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.55s 2025-09-04 00:57:10.397593 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s 2025-09-04 00:57:10.397604 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s 2025-09-04 00:57:10.397615 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s 2025-09-04 00:57:10.397626 | orchestrator | 2025-09-04 00:57:10 | INFO  | Task 6f36cbd2-ca76-495e-ac06-a32a9520e4a1 is in state STARTED 2025-09-04 00:57:10.397968 | orchestrator | 2025-09-04 00:57:10 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:57:10.398398 | orchestrator | 2025-09-04 00:57:10 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:57:13.445086 | orchestrator | 2025-09-04 00:57:13 | INFO  | Task 6f36cbd2-ca76-495e-ac06-a32a9520e4a1 is in state STARTED 2025-09-04 00:57:13.446696 | orchestrator | 2025-09-04 00:57:13 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:57:13.446890 | orchestrator | 2025-09-04 00:57:13 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:57:16.483002 | orchestrator | 2025-09-04 00:57:16 | INFO  | Task 6f36cbd2-ca76-495e-ac06-a32a9520e4a1 is in state STARTED 2025-09-04 00:57:16.483124 | orchestrator | 2025-09-04 00:57:16 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state STARTED 2025-09-04 00:57:16.483358 | orchestrator | 2025-09-04 00:57:16 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:57:19.526525 | orchestrator | 2025-09-04 00:57:19 | INFO  | Task 
6f36cbd2-ca76-495e-ac06-a32a9520e4a1 is in state STARTED 2025-09-04 00:57:19.530666 | orchestrator | 2025-09-04 00:57:19 | INFO  | Task 5eaa2de8-0d8b-4223-b728-894e26111498 is in state SUCCESS 2025-09-04 00:57:19.532507 | orchestrator | 2025-09-04 00:57:19.532539 | orchestrator | 2025-09-04 00:57:19.532552 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-09-04 00:57:19.532564 | orchestrator | 2025-09-04 00:57:19.532575 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-09-04 00:57:19.532586 | orchestrator | Thursday 04 September 2025 00:54:04 +0000 (0:00:00.106) 0:00:00.106 **** 2025-09-04 00:57:19.532597 | orchestrator | ok: [localhost] => { 2025-09-04 00:57:19.532609 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-09-04 00:57:19.532621 | orchestrator | } 2025-09-04 00:57:19.532632 | orchestrator | 2025-09-04 00:57:19.532643 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-09-04 00:57:19.532654 | orchestrator | Thursday 04 September 2025 00:54:05 +0000 (0:00:00.054) 0:00:00.161 **** 2025-09-04 00:57:19.532666 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-09-04 00:57:19.532677 | orchestrator | ...ignoring 2025-09-04 00:57:19.532690 | orchestrator | 2025-09-04 00:57:19.532701 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-09-04 00:57:19.532726 | orchestrator | Thursday 04 September 2025 00:54:07 +0000 (0:00:02.895) 0:00:03.057 **** 2025-09-04 00:57:19.532738 | orchestrator | skipping: [localhost] 2025-09-04 00:57:19.532749 | orchestrator | 2025-09-04 00:57:19.532760 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-09-04 00:57:19.532771 | orchestrator | Thursday 04 September 2025 00:54:07 +0000 (0:00:00.061) 0:00:03.119 **** 2025-09-04 00:57:19.532782 | orchestrator | ok: [localhost] 2025-09-04 00:57:19.532979 | orchestrator | 2025-09-04 00:57:19.532992 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-04 00:57:19.533003 | orchestrator | 2025-09-04 00:57:19.533014 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-04 00:57:19.533024 | orchestrator | Thursday 04 September 2025 00:54:08 +0000 (0:00:00.175) 0:00:03.294 **** 2025-09-04 00:57:19.533035 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:57:19.533103 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:57:19.533116 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:57:19.533126 | orchestrator | 2025-09-04 00:57:19.533137 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-04 00:57:19.533148 | orchestrator | Thursday 04 September 2025 00:54:08 +0000 (0:00:00.332) 0:00:03.626 **** 2025-09-04 00:57:19.533158 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-09-04 00:57:19.533169 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-09-04 00:57:19.533203 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-09-04 00:57:19.533214 | orchestrator | 2025-09-04 00:57:19.533225 | orchestrator | PLAY [Apply role mariadb] 
****************************************************** 2025-09-04 00:57:19.533235 | orchestrator | 2025-09-04 00:57:19.533246 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-09-04 00:57:19.533257 | orchestrator | Thursday 04 September 2025 00:54:09 +0000 (0:00:00.546) 0:00:04.173 **** 2025-09-04 00:57:19.533268 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-04 00:57:19.533278 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-04 00:57:19.533289 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-04 00:57:19.533300 | orchestrator | 2025-09-04 00:57:19.533310 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-04 00:57:19.533321 | orchestrator | Thursday 04 September 2025 00:54:09 +0000 (0:00:00.368) 0:00:04.541 **** 2025-09-04 00:57:19.533331 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:57:19.533342 | orchestrator | 2025-09-04 00:57:19.533353 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-09-04 00:57:19.533363 | orchestrator | Thursday 04 September 2025 00:54:10 +0000 (0:00:00.693) 0:00:05.235 **** 2025-09-04 00:57:19.533392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-04 00:57:19.533417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-04 00:57:19.533438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-04 00:57:19.533451 | orchestrator | 2025-09-04 00:57:19.533469 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-09-04 00:57:19.533481 | orchestrator | Thursday 04 
September 2025 00:54:13 +0000 (0:00:03.202) 0:00:08.438 **** 2025-09-04 00:57:19.533491 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:57:19.533502 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:57:19.533513 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:57:19.533523 | orchestrator | 2025-09-04 00:57:19.533534 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-09-04 00:57:19.533544 | orchestrator | Thursday 04 September 2025 00:54:14 +0000 (0:00:00.798) 0:00:09.237 **** 2025-09-04 00:57:19.533555 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:57:19.533566 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:57:19.533576 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:57:19.533587 | orchestrator | 2025-09-04 00:57:19.533600 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-09-04 00:57:19.533612 | orchestrator | Thursday 04 September 2025 00:54:15 +0000 (0:00:01.417) 0:00:10.655 **** 2025-09-04 00:57:19.533630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-04 00:57:19.533658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-04 00:57:19.533678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-04 00:57:19.533697 | orchestrator | 2025-09-04 00:57:19.533710 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-09-04 00:57:19.533722 | orchestrator | Thursday 04 September 2025 00:54:19 +0000 (0:00:03.980) 0:00:14.636 **** 2025-09-04 00:57:19.533734 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:57:19.533747 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:57:19.533759 | orchestrator | changed: 
[testbed-node-0] 2025-09-04 00:57:19.533771 | orchestrator | 2025-09-04 00:57:19.533784 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-09-04 00:57:19.533796 | orchestrator | Thursday 04 September 2025 00:54:20 +0000 (0:00:01.110) 0:00:15.747 **** 2025-09-04 00:57:19.533809 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:57:19.533821 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:57:19.533833 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:57:19.533846 | orchestrator | 2025-09-04 00:57:19.533858 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-04 00:57:19.533870 | orchestrator | Thursday 04 September 2025 00:54:25 +0000 (0:00:04.818) 0:00:20.566 **** 2025-09-04 00:57:19.533883 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:57:19.533895 | orchestrator | 2025-09-04 00:57:19.533908 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-04 00:57:19.533920 | orchestrator | Thursday 04 September 2025 00:54:26 +0000 (0:00:00.609) 0:00:21.175 **** 2025-09-04 00:57:19.533943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-04 00:57:19.533963 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:57:19.533980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-04 00:57:19.533992 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:57:19.534010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-04 00:57:19.534084 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:57:19.534096 | orchestrator | 2025-09-04 00:57:19.534107 | orchestrator 
| TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-09-04 00:57:19.534118 | orchestrator | Thursday 04 September 2025 00:54:29 +0000 (0:00:03.269) 0:00:24.445 **** 2025-09-04 00:57:19.534134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-04 00:57:19.534146 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:57:19.534164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-04 00:57:19.534189 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:57:19.534206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-04 00:57:19.534217 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:57:19.534228 | orchestrator | 2025-09-04 00:57:19.534239 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-09-04 00:57:19.534249 | orchestrator | Thursday 04 September 2025 00:54:31 +0000 (0:00:02.679) 0:00:27.124 **** 2025-09-04 00:57:19.534261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 
'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-04 00:57:19.534279 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:57:19.534303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-04 00:57:19.534316 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:57:19.534327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-04 00:57:19.534345 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:57:19.534356 | orchestrator | 2025-09-04 00:57:19.534366 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-09-04 00:57:19.534377 | orchestrator | Thursday 04 September 2025 00:54:34 +0000 (0:00:02.604) 0:00:29.729 **** 2025-09-04 00:57:19.534400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-04 00:57:19.534413 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-04 00:57:19.534438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' 
server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-04 00:57:19.534456 | orchestrator | 2025-09-04 00:57:19.534468 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-09-04 00:57:19.534478 | orchestrator | Thursday 04 September 2025 00:54:37 +0000 (0:00:03.384) 0:00:33.114 **** 2025-09-04 00:57:19.534489 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:57:19.534499 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:57:19.534510 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:57:19.534520 | orchestrator | 2025-09-04 00:57:19.534531 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-09-04 00:57:19.534542 | orchestrator | Thursday 04 September 2025 00:54:38 +0000 (0:00:00.998) 0:00:34.113 **** 2025-09-04 00:57:19.534552 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:57:19.534563 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:57:19.534573 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:57:19.534583 | orchestrator | 2025-09-04 00:57:19.534594 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-09-04 00:57:19.534604 | orchestrator | Thursday 04 September 2025 00:54:39 +0000 (0:00:00.563) 0:00:34.676 **** 2025-09-04 00:57:19.534615 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:57:19.534625 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:57:19.534636 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:57:19.534646 | orchestrator | 2025-09-04 00:57:19.534657 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-09-04 00:57:19.534667 | orchestrator | Thursday 04 September 2025 00:54:39 +0000 (0:00:00.342) 0:00:35.019 **** 2025-09-04 00:57:19.534679 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-09-04 00:57:19.534689 | orchestrator | ...ignoring 2025-09-04 00:57:19.534700 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-09-04 00:57:19.534711 | orchestrator | ...ignoring 2025-09-04 00:57:19.534721 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-09-04 00:57:19.534732 | orchestrator | ...ignoring 2025-09-04 00:57:19.534748 | orchestrator | 2025-09-04 00:57:19.534759 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-09-04 00:57:19.534770 | orchestrator | Thursday 04 September 2025 00:54:50 +0000 (0:00:10.951) 0:00:45.970 **** 2025-09-04 00:57:19.534780 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:57:19.534791 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:57:19.534801 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:57:19.534811 | orchestrator | 2025-09-04 00:57:19.534822 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-09-04 00:57:19.534832 | orchestrator | Thursday 04 September 2025 00:54:51 +0000 (0:00:00.439) 0:00:46.410 **** 2025-09-04 00:57:19.534843 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:57:19.534853 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:57:19.534864 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:57:19.534874 | orchestrator | 2025-09-04 00:57:19.534885 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-09-04 00:57:19.534895 | orchestrator | Thursday 04 September 2025 00:54:51 +0000 (0:00:00.642) 0:00:47.052 **** 2025-09-04 00:57:19.534905 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:57:19.534916 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:57:19.534926 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:57:19.534936 | orchestrator | 2025-09-04 00:57:19.534947 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-09-04 00:57:19.534958 | orchestrator | Thursday 04 September 2025 00:54:52 +0000 (0:00:00.473) 0:00:47.526 **** 2025-09-04 00:57:19.534968 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:57:19.534978 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:57:19.534989 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:57:19.534999 | orchestrator | 2025-09-04 00:57:19.535010 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-09-04 00:57:19.535020 | orchestrator | Thursday 04 September 2025 00:54:52 +0000 (0:00:00.449) 0:00:47.975 **** 2025-09-04 00:57:19.535031 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:57:19.535041 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:57:19.535066 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:57:19.535076 | orchestrator | 2025-09-04 00:57:19.535087 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-09-04 00:57:19.535098 | orchestrator | Thursday 04 September 2025 00:54:53 +0000 (0:00:00.388) 0:00:48.364 **** 2025-09-04 00:57:19.535114 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:57:19.535125 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:57:19.535136 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:57:19.535147 | orchestrator | 2025-09-04 00:57:19.535158 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-04 00:57:19.535169 | orchestrator | Thursday 04 September 2025 00:54:53 +0000 (0:00:00.609) 0:00:48.973 **** 2025-09-04 00:57:19.535179 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:57:19.535190 | orchestrator | skipping: 
[testbed-node-2] 2025-09-04 00:57:19.535200 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-09-04 00:57:19.535211 | orchestrator | 2025-09-04 00:57:19.535222 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-09-04 00:57:19.535232 | orchestrator | Thursday 04 September 2025 00:54:54 +0000 (0:00:00.398) 0:00:49.372 **** 2025-09-04 00:57:19.535243 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:57:19.535253 | orchestrator | 2025-09-04 00:57:19.535264 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-09-04 00:57:19.535275 | orchestrator | Thursday 04 September 2025 00:55:04 +0000 (0:00:10.154) 0:00:59.527 **** 2025-09-04 00:57:19.535285 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:57:19.535296 | orchestrator | 2025-09-04 00:57:19.535311 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-04 00:57:19.535322 | orchestrator | Thursday 04 September 2025 00:55:04 +0000 (0:00:00.115) 0:00:59.643 **** 2025-09-04 00:57:19.535333 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:57:19.535349 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:57:19.535360 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:57:19.535371 | orchestrator | 2025-09-04 00:57:19.535381 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-09-04 00:57:19.535392 | orchestrator | Thursday 04 September 2025 00:55:05 +0000 (0:00:00.989) 0:01:00.633 **** 2025-09-04 00:57:19.535402 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:57:19.535413 | orchestrator | 2025-09-04 00:57:19.535424 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-09-04 00:57:19.535434 | orchestrator | Thursday 04 September 2025 00:55:13 +0000 (0:00:07.596) 0:01:08.230 **** 2025-09-04 00:57:19.535445 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:57:19.535455 | orchestrator | 2025-09-04 00:57:19.535466 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-09-04 00:57:19.535477 | orchestrator | Thursday 04 September 2025 00:55:14 +0000 (0:00:01.590) 0:01:09.820 **** 2025-09-04 00:57:19.535488 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:57:19.535498 | orchestrator | 2025-09-04 00:57:19.535509 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-09-04 00:57:19.535519 | orchestrator | Thursday 04 September 2025 00:55:17 +0000 (0:00:02.519) 0:01:12.340 **** 2025-09-04 00:57:19.535530 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:57:19.535541 | orchestrator | 2025-09-04 00:57:19.535552 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-09-04 00:57:19.535562 | orchestrator | Thursday 04 September 2025 00:55:17 +0000 (0:00:00.111) 0:01:12.452 **** 2025-09-04 00:57:19.535573 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:57:19.535583 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:57:19.535594 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:57:19.535604 | orchestrator | 2025-09-04 00:57:19.535615 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-09-04 00:57:19.535626 | orchestrator | Thursday 04 September 2025 00:55:17 +0000 (0:00:00.328) 0:01:12.780 **** 
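The bootstrap sequence above (the bootstrap container and "Starting first MariaDB container" on testbed-node-0, followed by starting the remaining members) is the usual Galera pattern: exactly one node initialises a new cluster, and the others then join it via wsrep_cluster_address. A rough, non-containerised sketch of that ordering (galera_new_cluster and the mariadb service are generic MariaDB/Galera usage, not kolla-ansible's implementation, which performs the same steps inside the mariadb container):

# Rough sketch of the Galera bootstrap order shown above, assuming a plain
# package-based MariaDB install instead of kolla containers.
- hosts: testbed-node-0
  tasks:
    - name: Initialise a brand-new Galera cluster on the first node only
      ansible.builtin.command: galera_new_cluster   # wraps mysqld --wsrep-new-cluster

- hosts: testbed-node-1,testbed-node-2
  serial: 1   # bring the joiners in one at a time, as the log does
  tasks:
    - name: Start MariaDB so the node joins via wsrep_cluster_address
      ansible.builtin.service:
        name: mariadb
        state: started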
2025-09-04 00:57:19.535636 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:57:19.535647 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-09-04 00:57:19.535658 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:57:19.535668 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:57:19.535679 | orchestrator | 2025-09-04 00:57:19.535690 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-09-04 00:57:19.535700 | orchestrator | skipping: no hosts matched 2025-09-04 00:57:19.535711 | orchestrator | 2025-09-04 00:57:19.535722 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-04 00:57:19.535732 | orchestrator | 2025-09-04 00:57:19.535743 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-04 00:57:19.535753 | orchestrator | Thursday 04 September 2025 00:55:18 +0000 (0:00:00.588) 0:01:13.369 **** 2025-09-04 00:57:19.535764 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:57:19.535775 | orchestrator | 2025-09-04 00:57:19.535785 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-04 00:57:19.535796 | orchestrator | Thursday 04 September 2025 00:55:42 +0000 (0:00:23.792) 0:01:37.161 **** 2025-09-04 00:57:19.535806 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:57:19.535817 | orchestrator | 2025-09-04 00:57:19.535827 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-04 00:57:19.535838 | orchestrator | Thursday 04 September 2025 00:55:57 +0000 (0:00:15.549) 0:01:52.711 **** 2025-09-04 00:57:19.535848 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:57:19.535859 | orchestrator | 2025-09-04 00:57:19.535869 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-04 00:57:19.535880 | orchestrator | 2025-09-04 00:57:19.535891 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-04 00:57:19.535901 | orchestrator | Thursday 04 September 2025 00:55:59 +0000 (0:00:02.413) 0:01:55.124 **** 2025-09-04 00:57:19.535918 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:57:19.535929 | orchestrator | 2025-09-04 00:57:19.535939 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-04 00:57:19.535950 | orchestrator | Thursday 04 September 2025 00:56:26 +0000 (0:00:26.268) 0:02:21.393 **** 2025-09-04 00:57:19.535960 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:57:19.535971 | orchestrator | 2025-09-04 00:57:19.535981 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-04 00:57:19.535992 | orchestrator | Thursday 04 September 2025 00:56:41 +0000 (0:00:15.542) 0:02:36.936 **** 2025-09-04 00:57:19.536003 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:57:19.536013 | orchestrator | 2025-09-04 00:57:19.536024 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-09-04 00:57:19.536034 | orchestrator | 2025-09-04 00:57:19.536064 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-04 00:57:19.536076 | orchestrator | Thursday 04 September 2025 00:56:44 +0000 (0:00:02.514) 0:02:39.450 **** 2025-09-04 00:57:19.536087 | orchestrator | changed: [testbed-node-0] 
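The repeated "Wait for MariaDB service port liveness" and "Wait for MariaDB service to sync WSREP" tasks (and the earlier, deliberately ignored "Check MariaDB service port liveness" failures) boil down to two checks: wait_for with a search string on port 3306, and polling wsrep_local_state_comment until the node reports Synced. A hedged approximation of both checks follows; the credentials, timeout and retry values are placeholders, not the role's defaults:

# Approximation of the two readiness checks; the user, password variable and
# timing values are assumptions, not taken from the mariadb role.
- name: Wait for MariaDB service port liveness
  ansible.builtin.wait_for:
    host: "{{ ansible_host }}"
    port: 3306
    search_regex: MariaDB   # on failure this yields the "Timeout when waiting
    timeout: 60             # for search string MariaDB in <ip>:3306" message

- name: Wait for MariaDB service to sync WSREP
  ansible.builtin.command: >
    mysql -u monitor -p{{ mariadb_monitor_password }} -N -B
    -e "SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'"
  register: wsrep_status
  until: "'Synced' in wsrep_status.stdout"
  retries: 30
  delay: 10
  changed_when: false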
2025-09-04 00:57:19.536097 | orchestrator | 2025-09-04 00:57:19.536108 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-04 00:57:19.536118 | orchestrator | Thursday 04 September 2025 00:56:56 +0000 (0:00:11.904) 0:02:51.354 **** 2025-09-04 00:57:19.536129 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:57:19.536139 | orchestrator | 2025-09-04 00:57:19.536150 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-04 00:57:19.536161 | orchestrator | Thursday 04 September 2025 00:57:02 +0000 (0:00:06.563) 0:02:57.918 **** 2025-09-04 00:57:19.536171 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:57:19.536182 | orchestrator | 2025-09-04 00:57:19.536193 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-09-04 00:57:19.536203 | orchestrator | 2025-09-04 00:57:19.536214 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-09-04 00:57:19.536224 | orchestrator | Thursday 04 September 2025 00:57:05 +0000 (0:00:02.767) 0:03:00.685 **** 2025-09-04 00:57:19.536235 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:57:19.536245 | orchestrator | 2025-09-04 00:57:19.536256 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-09-04 00:57:19.536267 | orchestrator | Thursday 04 September 2025 00:57:06 +0000 (0:00:00.557) 0:03:01.243 **** 2025-09-04 00:57:19.536277 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:57:19.536288 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:57:19.536299 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:57:19.536309 | orchestrator | 2025-09-04 00:57:19.536320 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-09-04 00:57:19.536330 | orchestrator | Thursday 04 September 2025 00:57:08 +0000 (0:00:02.411) 0:03:03.655 **** 2025-09-04 00:57:19.536341 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:57:19.536351 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:57:19.536362 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:57:19.536372 | orchestrator | 2025-09-04 00:57:19.536383 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-09-04 00:57:19.536394 | orchestrator | Thursday 04 September 2025 00:57:10 +0000 (0:00:02.345) 0:03:06.001 **** 2025-09-04 00:57:19.536404 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:57:19.536415 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:57:19.536425 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:57:19.536436 | orchestrator | 2025-09-04 00:57:19.536447 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-09-04 00:57:19.536457 | orchestrator | Thursday 04 September 2025 00:57:13 +0000 (0:00:02.188) 0:03:08.190 **** 2025-09-04 00:57:19.536468 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:57:19.536478 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:57:19.536489 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:57:19.536506 | orchestrator | 2025-09-04 00:57:19.536516 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-09-04 00:57:19.536527 | orchestrator | Thursday 04 September 2025 00:57:15 +0000 (0:00:02.111) 0:03:10.302 **** 
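Editor's note: the post-configuration tasks above run only on the bootstrap host (they are skipped on testbed-node-1 and testbed-node-2) and create the shard root, monitor and backup accounts together with their grants. A hypothetical sketch of that kind of idempotent user setup; account names, host pattern, credentials and the privilege list are placeholders.

import pymysql

def ensure_user(conn, name: str, host: str, password: str, grant: str) -> None:
    """Create a MySQL/MariaDB account if it is missing and apply the given grant."""
    with conn.cursor() as cur:
        cur.execute("CREATE USER IF NOT EXISTS %s@%s IDENTIFIED BY %s",
                    (name, host, password))
        # The privilege list and target object cannot be bound as parameters,
        # so they are interpolated directly for this illustration.
        cur.execute(f"GRANT {grant} TO '{name}'@'{host}'")
    conn.commit()

conn = pymysql.connect(host="192.168.16.10", user="root", password="secret")
ensure_user(conn, "backup", "%", "secret",
            "RELOAD, PROCESS, LOCK TABLES, REPLICATION CLIENT ON *.*")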
2025-09-04 00:57:19.536537 | orchestrator | ok: [testbed-node-0]
2025-09-04 00:57:19.536548 | orchestrator | ok: [testbed-node-1]
2025-09-04 00:57:19.536559 | orchestrator | ok: [testbed-node-2]
2025-09-04 00:57:19.536569 | orchestrator |
2025-09-04 00:57:19.536580 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2025-09-04 00:57:19.536591 | orchestrator | Thursday 04 September 2025 00:57:18 +0000 (0:00:02.942) 0:03:13.244 ****
2025-09-04 00:57:19.536601 | orchestrator | skipping: [testbed-node-0]
2025-09-04 00:57:19.536612 | orchestrator | skipping: [testbed-node-1]
2025-09-04 00:57:19.536622 | orchestrator | skipping: [testbed-node-2]
2025-09-04 00:57:19.536633 | orchestrator |
2025-09-04 00:57:19.536643 | orchestrator | PLAY RECAP *********************************************************************
2025-09-04 00:57:19.536654 | orchestrator | localhost      : ok=3   changed=0   unreachable=0   failed=0   skipped=1    rescued=0   ignored=1
2025-09-04 00:57:19.536665 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0   failed=0   skipped=11   rescued=0   ignored=1
2025-09-04 00:57:19.536677 | orchestrator | testbed-node-1 : ok=20  changed=7   unreachable=0   failed=0   skipped=18   rescued=0   ignored=1
2025-09-04 00:57:19.536688 | orchestrator | testbed-node-2 : ok=20  changed=7   unreachable=0   failed=0   skipped=18   rescued=0   ignored=1
2025-09-04 00:57:19.536699 | orchestrator |
2025-09-04 00:57:19.536709 | orchestrator |
2025-09-04 00:57:19.536720 | orchestrator | TASKS RECAP ********************************************************************
2025-09-04 00:57:19.536730 | orchestrator | Thursday 04 September 2025 00:57:18 +0000 (0:00:00.419) 0:03:13.663 ****
2025-09-04 00:57:19.536741 | orchestrator | ===============================================================================
2025-09-04 00:57:19.536751 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 50.06s
2025-09-04 00:57:19.536762 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 31.09s
2025-09-04 00:57:19.536773 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.90s
2025-09-04 00:57:19.536783 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.95s
2025-09-04 00:57:19.536794 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.15s
2025-09-04 00:57:19.536804 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.60s
2025-09-04 00:57:19.536900 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 6.56s
2025-09-04 00:57:19.536922 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.93s
2025-09-04 00:57:19.536932 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.82s
2025-09-04 00:57:19.536943 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.98s
2025-09-04 00:57:19.536953 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.38s
2025-09-04 00:57:19.536964 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.27s
2025-09-04 00:57:19.536974 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.20s
2025-09-04 00:57:19.536985 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.94s
2025-09-04 00:57:19.536995 | orchestrator | Check MariaDB service --------------------------------------------------- 2.90s
2025-09-04 00:57:19.537006 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.77s
2025-09-04 00:57:19.537016 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.68s
2025-09-04 00:57:19.537027 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.60s
2025-09-04 00:57:19.537073 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.52s
2025-09-04 00:57:19.537086 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.41s
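Editor's note: once the play has finished, the deployment CLI keeps polling the manager for the state of the remaining tasks, which produces the status entries below. A minimal sketch of that polling pattern; get_task_state() is a stand-in for whatever client call the tooling actually uses and is not part of the logged output.

import time

def wait_for_tasks(task_ids, get_task_state, interval: float = 1.0) -> None:
    """Report task states each round and stop once every task has finished."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval:.0f} second(s) until the next check")
            time.sleep(interval)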
2025-09-04 00:57:19.537097 | orchestrator | 2025-09-04 00:57:19 | INFO  | Wait 1 second(s) until the next check
[Polling repeats every ~3 seconds from 00:57:22 to 00:58:23: tasks 6f36cbd2-ca76-495e-ac06-a32a9520e4a1, 6953149f-bd05-4614-82da-41768948ac2a and 0a156da1-3458-4301-9ba4-212f7042484c are each reported in state STARTED, followed by "Wait 1 second(s) until the next check".]
2025-09-04 00:58:26.541780 | orchestrator | 2025-09-04 00:58:26 | INFO  | Task 9705aa39-3916-4dab-af29-606ad5dea387 is in state STARTED
2025-09-04 00:58:26.544297 | orchestrator | 2025-09-04 00:58:26 | INFO  | Task 6f36cbd2-ca76-495e-ac06-a32a9520e4a1 is in state SUCCESS
2025-09-04 00:58:26.546643 | orchestrator |
2025-09-04 00:58:26.547231 | orchestrator |
2025-09-04 00:58:26.547249 | orchestrator | PLAY [Create ceph pools] *******************************************************
2025-09-04 00:58:26.547262 | orchestrator |
2025-09-04 00:58:26.547273 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-09-04 00:58:26.547285 | orchestrator | Thursday 04 September 2025 00:56:15 +0000 (0:00:00.620) 0:00:00.620 ****
2025-09-04 00:58:26.547297 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-04 00:58:26.547309 | orchestrator |
2025-09-04 00:58:26.547320 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-09-04 00:58:26.547332 | orchestrator | Thursday 04 September 2025 00:56:16 +0000 (0:00:00.653) 0:00:01.274 ****
2025-09-04 00:58:26.547343 | orchestrator | ok: [testbed-node-5]
2025-09-04 00:58:26.547356 | orchestrator | ok: [testbed-node-3]
2025-09-04 00:58:26.547367 | orchestrator | ok: [testbed-node-4]
2025-09-04 00:58:26.547378 | orchestrator |
2025-09-04 00:58:26.547389 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-09-04 00:58:26.547400 | orchestrator | Thursday 04 September 2025 00:56:16 +0000 (0:00:00.272) 0:00:01.944 ****
2025-09-04 00:58:26.547411 | orchestrator | ok: [testbed-node-3]
2025-09-04 00:58:26.547422 | orchestrator | ok: [testbed-node-4]
2025-09-04 00:58:26.547433 | orchestrator | ok: [testbed-node-5]
2025-09-04 00:58:26.547443 | orchestrator |
2025-09-04 00:58:26.547512 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-09-04 00:58:26.547528 | orchestrator | Thursday 04 September 2025 00:56:16 +0000 (0:00:00.769) 0:00:02.217 ****
2025-09-04 00:58:26.547540 | orchestrator | ok: [testbed-node-3]
2025-09-04 00:58:26.547551 | orchestrator | ok: [testbed-node-4]
2025-09-04 00:58:26.547561 | orchestrator | ok: [testbed-node-5]
2025-09-04 00:58:26.547572 |
orchestrator | 2025-09-04 00:58:26.547583 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-09-04 00:58:26.547594 | orchestrator | Thursday 04 September 2025 00:56:17 +0000 (0:00:00.769) 0:00:02.986 **** 2025-09-04 00:58:26.547605 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:58:26.547615 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:58:26.547626 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:58:26.547637 | orchestrator | 2025-09-04 00:58:26.547648 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-09-04 00:58:26.547658 | orchestrator | Thursday 04 September 2025 00:56:18 +0000 (0:00:00.303) 0:00:03.290 **** 2025-09-04 00:58:26.547669 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:58:26.547680 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:58:26.547690 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:58:26.547701 | orchestrator | 2025-09-04 00:58:26.547712 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-09-04 00:58:26.547723 | orchestrator | Thursday 04 September 2025 00:56:18 +0000 (0:00:00.331) 0:00:03.622 **** 2025-09-04 00:58:26.547733 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:58:26.547744 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:58:26.547755 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:58:26.547766 | orchestrator | 2025-09-04 00:58:26.547777 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-09-04 00:58:26.547787 | orchestrator | Thursday 04 September 2025 00:56:18 +0000 (0:00:00.309) 0:00:03.931 **** 2025-09-04 00:58:26.547798 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:58:26.547810 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:58:26.547821 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:58:26.547833 | orchestrator | 2025-09-04 00:58:26.547883 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-09-04 00:58:26.547895 | orchestrator | Thursday 04 September 2025 00:56:19 +0000 (0:00:00.502) 0:00:04.434 **** 2025-09-04 00:58:26.547906 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:58:26.547916 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:58:26.547927 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:58:26.547938 | orchestrator | 2025-09-04 00:58:26.547948 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-09-04 00:58:26.547959 | orchestrator | Thursday 04 September 2025 00:56:19 +0000 (0:00:00.300) 0:00:04.735 **** 2025-09-04 00:58:26.547970 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-04 00:58:26.547980 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-04 00:58:26.547991 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-04 00:58:26.548001 | orchestrator | 2025-09-04 00:58:26.548012 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-09-04 00:58:26.548022 | orchestrator | Thursday 04 September 2025 00:56:20 +0000 (0:00:00.679) 0:00:05.414 **** 2025-09-04 00:58:26.548033 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:58:26.548044 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:58:26.548054 | orchestrator | ok: [testbed-node-5] 2025-09-04 
00:58:26.548087 | orchestrator | 2025-09-04 00:58:26.548099 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-09-04 00:58:26.548109 | orchestrator | Thursday 04 September 2025 00:56:20 +0000 (0:00:00.412) 0:00:05.826 **** 2025-09-04 00:58:26.548120 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-04 00:58:26.548130 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-04 00:58:26.548141 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-04 00:58:26.548152 | orchestrator | 2025-09-04 00:58:26.548164 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-09-04 00:58:26.548177 | orchestrator | Thursday 04 September 2025 00:56:22 +0000 (0:00:02.032) 0:00:07.859 **** 2025-09-04 00:58:26.548190 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-04 00:58:26.548205 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-04 00:58:26.548217 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-04 00:58:26.548230 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:58:26.548243 | orchestrator | 2025-09-04 00:58:26.548255 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-09-04 00:58:26.548310 | orchestrator | Thursday 04 September 2025 00:56:23 +0000 (0:00:00.382) 0:00:08.241 **** 2025-09-04 00:58:26.548327 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.548343 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.548356 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.548369 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:58:26.548382 | orchestrator | 2025-09-04 00:58:26.548394 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-09-04 00:58:26.548407 | orchestrator | Thursday 04 September 2025 00:56:23 +0000 (0:00:00.773) 0:00:09.014 **** 2025-09-04 00:58:26.548422 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.548447 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.548465 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.548478 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:58:26.548491 | orchestrator | 2025-09-04 00:58:26.548504 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-09-04 00:58:26.548516 | orchestrator | Thursday 04 September 2025 00:56:23 +0000 (0:00:00.167) 0:00:09.182 **** 2025-09-04 00:58:26.548529 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '185c02bbc349', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-04 00:56:21.226725', 'end': '2025-09-04 00:56:21.274578', 'delta': '0:00:00.047853', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['185c02bbc349'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-09-04 00:58:26.548543 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'fe63b36f8ea6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-04 00:56:21.962897', 'end': '2025-09-04 00:56:21.998154', 'delta': '0:00:00.035257', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fe63b36f8ea6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-09-04 00:58:26.548586 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'f0aea18e8b26', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-04 00:56:22.463917', 'end': '2025-09-04 00:56:22.506053', 'delta': '0:00:00.042136', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f0aea18e8b26'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-09-04 00:58:26.548599 | orchestrator | 2025-09-04 00:58:26.548610 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-09-04 00:58:26.548628 | orchestrator | Thursday 04 September 2025 
00:56:24 +0000 (0:00:00.350) 0:00:09.533 **** 2025-09-04 00:58:26.548639 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:58:26.548650 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:58:26.548661 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:58:26.548672 | orchestrator | 2025-09-04 00:58:26.548682 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-09-04 00:58:26.548693 | orchestrator | Thursday 04 September 2025 00:56:24 +0000 (0:00:00.439) 0:00:09.973 **** 2025-09-04 00:58:26.548704 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-09-04 00:58:26.548714 | orchestrator | 2025-09-04 00:58:26.548725 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-09-04 00:58:26.548736 | orchestrator | Thursday 04 September 2025 00:56:26 +0000 (0:00:01.522) 0:00:11.496 **** 2025-09-04 00:58:26.548746 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:58:26.548757 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:58:26.548768 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:58:26.548779 | orchestrator | 2025-09-04 00:58:26.548789 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-09-04 00:58:26.548800 | orchestrator | Thursday 04 September 2025 00:56:26 +0000 (0:00:00.299) 0:00:11.795 **** 2025-09-04 00:58:26.548810 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:58:26.548821 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:58:26.548832 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:58:26.548843 | orchestrator | 2025-09-04 00:58:26.548853 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-04 00:58:26.548864 | orchestrator | Thursday 04 September 2025 00:56:26 +0000 (0:00:00.414) 0:00:12.210 **** 2025-09-04 00:58:26.548875 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:58:26.548886 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:58:26.548896 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:58:26.548907 | orchestrator | 2025-09-04 00:58:26.548918 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-09-04 00:58:26.548929 | orchestrator | Thursday 04 September 2025 00:56:27 +0000 (0:00:00.497) 0:00:12.707 **** 2025-09-04 00:58:26.548940 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:58:26.548950 | orchestrator | 2025-09-04 00:58:26.548961 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-09-04 00:58:26.548976 | orchestrator | Thursday 04 September 2025 00:56:27 +0000 (0:00:00.130) 0:00:12.838 **** 2025-09-04 00:58:26.548987 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:58:26.548998 | orchestrator | 2025-09-04 00:58:26.549009 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-04 00:58:26.549020 | orchestrator | Thursday 04 September 2025 00:56:27 +0000 (0:00:00.223) 0:00:13.061 **** 2025-09-04 00:58:26.549030 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:58:26.549041 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:58:26.549052 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:58:26.549107 | orchestrator | 2025-09-04 00:58:26.549120 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-09-04 00:58:26.549130 | orchestrator | Thursday 04 
September 2025 00:56:28 +0000 (0:00:00.276) 0:00:13.338 **** 2025-09-04 00:58:26.549141 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:58:26.549152 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:58:26.549163 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:58:26.549173 | orchestrator | 2025-09-04 00:58:26.549184 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-09-04 00:58:26.549195 | orchestrator | Thursday 04 September 2025 00:56:28 +0000 (0:00:00.315) 0:00:13.653 **** 2025-09-04 00:58:26.549205 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:58:26.549216 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:58:26.549226 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:58:26.549237 | orchestrator | 2025-09-04 00:58:26.549248 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-09-04 00:58:26.549267 | orchestrator | Thursday 04 September 2025 00:56:28 +0000 (0:00:00.532) 0:00:14.185 **** 2025-09-04 00:58:26.549370 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:58:26.549382 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:58:26.549393 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:58:26.549404 | orchestrator | 2025-09-04 00:58:26.549414 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-09-04 00:58:26.549425 | orchestrator | Thursday 04 September 2025 00:56:29 +0000 (0:00:00.310) 0:00:14.496 **** 2025-09-04 00:58:26.549436 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:58:26.549447 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:58:26.549458 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:58:26.549468 | orchestrator | 2025-09-04 00:58:26.549479 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-09-04 00:58:26.549490 | orchestrator | Thursday 04 September 2025 00:56:29 +0000 (0:00:00.288) 0:00:14.785 **** 2025-09-04 00:58:26.549500 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:58:26.549511 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:58:26.549522 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:58:26.549533 | orchestrator | 2025-09-04 00:58:26.549544 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-09-04 00:58:26.549585 | orchestrator | Thursday 04 September 2025 00:56:29 +0000 (0:00:00.322) 0:00:15.108 **** 2025-09-04 00:58:26.549598 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:58:26.549609 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:58:26.549619 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:58:26.549630 | orchestrator | 2025-09-04 00:58:26.549641 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-09-04 00:58:26.549652 | orchestrator | Thursday 04 September 2025 00:56:30 +0000 (0:00:00.528) 0:00:15.636 **** 2025-09-04 00:58:26.549664 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e3ae8186--8e2b--5882--a4eb--fc7766e33302-osd--block--e3ae8186--8e2b--5882--a4eb--fc7766e33302', 'dm-uuid-LVM-4vjUyByvbTXE2cfhT7hlQJZ9LjnIa5lcePmGsAiEDknKBRRuXTfUfJ1EIzE0QhFk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-04 00:58:26.549677 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7cbfcabc--5503--586a--bc65--b738adbafdd0-osd--block--7cbfcabc--5503--586a--bc65--b738adbafdd0', 'dm-uuid-LVM-eekyZqxUzJD44xvcGvdxubJyRd3iPZ8aI462i0beb8i872Gol5wCt22vfCvj5oW0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-04 00:58:26.549689 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:58:26.549706 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:58:26.549726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:58:26.549738 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:58:26.549749 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:58:26.549790 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2025-09-04 00:58:26.549804 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:58:26.549815 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:58:26.549834 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472', 'scsi-SQEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472-part1', 'scsi-SQEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472-part14', 'scsi-SQEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472-part15', 'scsi-SQEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472-part16', 'scsi-SQEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-04 00:58:26.549856 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--e3ae8186--8e2b--5882--a4eb--fc7766e33302-osd--block--e3ae8186--8e2b--5882--a4eb--fc7766e33302'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5AfiQI-1laC-THaR-1i1Z-gMhy-NdET-RnIBVH', 'scsi-0QEMU_QEMU_HARDDISK_7a7dbb75-788e-43b1-b83f-869b645534fa', 'scsi-SQEMU_QEMU_HARDDISK_7a7dbb75-788e-43b1-b83f-869b645534fa'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-04 00:58:26.549898 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--7cbfcabc--5503--586a--bc65--b738adbafdd0-osd--block--7cbfcabc--5503--586a--bc65--b738adbafdd0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ferPyo-yAsd-4A40-ipP1-yum7-Q4gw-IDZAP0', 'scsi-0QEMU_QEMU_HARDDISK_9f385429-6afa-4c66-a502-4318c5de8bbc', 'scsi-SQEMU_QEMU_HARDDISK_9f385429-6afa-4c66-a502-4318c5de8bbc'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-04 00:58:26.549911 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_061ec2a1-a81e-446f-a16e-21a91d07e637', 'scsi-SQEMU_QEMU_HARDDISK_061ec2a1-a81e-446f-a16e-21a91d07e637'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-04 00:58:26.549925 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-04-00-02-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-04 00:58:26.549941 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--23145802--20b8--57b1--85d3--97c3024bf9a4-osd--block--23145802--20b8--57b1--85d3--97c3024bf9a4', 'dm-uuid-LVM-M4Gv3tPZtWstc4V6EMcB6GcGcxkJLygbCgznPhTWcNnLUwfcj96SpOUmAxnQUtyl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-04 00:58:26.549959 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f7bb41d6--3a07--594b--a5ed--81bcd3eca58b-osd--block--f7bb41d6--3a07--594b--a5ed--81bcd3eca58b', 
'dm-uuid-LVM-BF4TTHSAu0Ubz13YxHfu7w3gnjdTioaP9Ve11VKbcW0EjtMkZ80KCjjDPwxmxizG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-04 00:58:26.549971 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:58:26.549982 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:58:26.549993 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:58:26.550142 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:58:26.550163 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:58:26.550177 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:58:26.550191 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:58:26.550203 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:58:26.550216 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:58:26.550259 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d', 'scsi-SQEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d-part1', 'scsi-SQEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d-part14', 'scsi-SQEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d-part15', 'scsi-SQEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d-part16', 'scsi-SQEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-04 00:58:26.550275 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4210a630--ee04--544c--a974--f1a5ed1af07b-osd--block--4210a630--ee04--544c--a974--f1a5ed1af07b', 'dm-uuid-LVM-RgfZvjfqrEZYQe7xidzbnkMdq2kk7SfUzDVMMUqdThVgyrunAKGu0TVv8LGAXis6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-04 00:58:26.550290 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': 
['ceph--23145802--20b8--57b1--85d3--97c3024bf9a4-osd--block--23145802--20b8--57b1--85d3--97c3024bf9a4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2SSp5E-3yAW-Tbzh-mbWS-dokf-F4dZ-pZhPAG', 'scsi-0QEMU_QEMU_HARDDISK_29dd0003-fb59-4bf4-b86e-44c71c2fb42f', 'scsi-SQEMU_QEMU_HARDDISK_29dd0003-fb59-4bf4-b86e-44c71c2fb42f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-04 00:58:26.550304 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f811dc1d--ef4c--5806--9269--19449158751d-osd--block--f811dc1d--ef4c--5806--9269--19449158751d', 'dm-uuid-LVM-4QPi7SlwXSqadW2vH7XkyFokqG8HVkr7YqDSWgMIcVKOvznkX8XTXJglZn1wCQPL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-04 00:58:26.550328 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f7bb41d6--3a07--594b--a5ed--81bcd3eca58b-osd--block--f7bb41d6--3a07--594b--a5ed--81bcd3eca58b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CdcaIX-bgqq-oQhe-4pd6-ASKm-1LJD-QCGPEZ', 'scsi-0QEMU_QEMU_HARDDISK_1db7e25a-216c-47db-963d-7c5f28c46cb6', 'scsi-SQEMU_QEMU_HARDDISK_1db7e25a-216c-47db-963d-7c5f28c46cb6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-04 00:58:26.550342 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:58:26.550356 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10337e6d-8da5-41a0-908c-89b7c0023a77', 'scsi-SQEMU_QEMU_HARDDISK_10337e6d-8da5-41a0-908c-89b7c0023a77'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-04 00:58:26.550376 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-04-00-02-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-04 00:58:26.550388 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:58:26.550398 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:58:26.550408 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:58:26.550418 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:58:26.550433 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:58:26.550448 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:58:26.550458 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:58:26.550467 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-04 00:58:26.550485 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb', 'scsi-SQEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb-part1', 'scsi-SQEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb-part14', 'scsi-SQEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb-part15', 'scsi-SQEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb-part16', 'scsi-SQEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-04 00:58:26.550506 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4210a630--ee04--544c--a974--f1a5ed1af07b-osd--block--4210a630--ee04--544c--a974--f1a5ed1af07b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CLO88k-jG6Z-1XWr-vKrj-ZYf3-iQib-Q9fjpq', 'scsi-0QEMU_QEMU_HARDDISK_d9ebab6f-eb3e-4afd-8760-ef197f6a89e1', 'scsi-SQEMU_QEMU_HARDDISK_d9ebab6f-eb3e-4afd-8760-ef197f6a89e1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-04 00:58:26.550517 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f811dc1d--ef4c--5806--9269--19449158751d-osd--block--f811dc1d--ef4c--5806--9269--19449158751d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-mdT39u-qveb-3l9j-3x7I-kS0Q-PhUc-MrZPOh', 'scsi-0QEMU_QEMU_HARDDISK_bcfba7c6-2fb5-4d57-9e19-86dc2ee83b3a', 'scsi-SQEMU_QEMU_HARDDISK_bcfba7c6-2fb5-4d57-9e19-86dc2ee83b3a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-04 00:58:26.550527 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a760aba3-b640-428b-b8ab-7cd9fc53fd0e', 'scsi-SQEMU_QEMU_HARDDISK_a760aba3-b640-428b-b8ab-7cd9fc53fd0e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-04 00:58:26.550542 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-04-00-02-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-04 00:58:26.550552 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:58:26.550562 | orchestrator | 2025-09-04 00:58:26.550572 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-09-04 00:58:26.550581 | orchestrator | Thursday 04 September 2025 00:56:31 +0000 (0:00:00.617) 0:00:16.254 **** 2025-09-04 00:58:26.550592 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e3ae8186--8e2b--5882--a4eb--fc7766e33302-osd--block--e3ae8186--8e2b--5882--a4eb--fc7766e33302', 'dm-uuid-LVM-4vjUyByvbTXE2cfhT7hlQJZ9LjnIa5lcePmGsAiEDknKBRRuXTfUfJ1EIzE0QhFk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.550608 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7cbfcabc--5503--586a--bc65--b738adbafdd0-osd--block--7cbfcabc--5503--586a--bc65--b738adbafdd0', 'dm-uuid-LVM-eekyZqxUzJD44xvcGvdxubJyRd3iPZ8aI462i0beb8i872Gol5wCt22vfCvj5oW0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.550622 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.550632 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.550642 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.550658 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.550668 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.550683 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.550693 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.550707 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--23145802--20b8--57b1--85d3--97c3024bf9a4-osd--block--23145802--20b8--57b1--85d3--97c3024bf9a4', 'dm-uuid-LVM-M4Gv3tPZtWstc4V6EMcB6GcGcxkJLygbCgznPhTWcNnLUwfcj96SpOUmAxnQUtyl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.550718 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.550732 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f7bb41d6--3a07--594b--a5ed--81bcd3eca58b-osd--block--f7bb41d6--3a07--594b--a5ed--81bcd3eca58b', 'dm-uuid-LVM-BF4TTHSAu0Ubz13YxHfu7w3gnjdTioaP9Ve11VKbcW0EjtMkZ80KCjjDPwxmxizG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.550748 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472', 'scsi-SQEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472-part1', 'scsi-SQEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472-part14', 'scsi-SQEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472-part15', 'scsi-SQEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472-part16', 'scsi-SQEMU_QEMU_HARDDISK_0d7986e8-ee88-4385-890c-4417efeba472-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.550766 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.550777 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--e3ae8186--8e2b--5882--a4eb--fc7766e33302-osd--block--e3ae8186--8e2b--5882--a4eb--fc7766e33302'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5AfiQI-1laC-THaR-1i1Z-gMhy-NdET-RnIBVH', 'scsi-0QEMU_QEMU_HARDDISK_7a7dbb75-788e-43b1-b83f-869b645534fa', 'scsi-SQEMU_QEMU_HARDDISK_7a7dbb75-788e-43b1-b83f-869b645534fa'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.550793 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.550810 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--7cbfcabc--5503--586a--bc65--b738adbafdd0-osd--block--7cbfcabc--5503--586a--bc65--b738adbafdd0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ferPyo-yAsd-4A40-ipP1-yum7-Q4gw-IDZAP0', 'scsi-0QEMU_QEMU_HARDDISK_9f385429-6afa-4c66-a502-4318c5de8bbc', 'scsi-SQEMU_QEMU_HARDDISK_9f385429-6afa-4c66-a502-4318c5de8bbc'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.550820 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.550834 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_061ec2a1-a81e-446f-a16e-21a91d07e637', 'scsi-SQEMU_QEMU_HARDDISK_061ec2a1-a81e-446f-a16e-21a91d07e637'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.550845 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.550860 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-04-00-02-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.550871 | orchestrator | skipping: [testbed-node-4] 
=> (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.550890 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:58:26.550900 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.550910 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.550924 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.550942 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d', 'scsi-SQEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d-part1', 'scsi-SQEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d-part14', 'scsi-SQEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d-part15', 'scsi-SQEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d-part16', 'scsi-SQEMU_QEMU_HARDDISK_1bb9a345-bda2-451d-96bf-1b31ebdd8b0d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.550959 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--23145802--20b8--57b1--85d3--97c3024bf9a4-osd--block--23145802--20b8--57b1--85d3--97c3024bf9a4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2SSp5E-3yAW-Tbzh-mbWS-dokf-F4dZ-pZhPAG', 'scsi-0QEMU_QEMU_HARDDISK_29dd0003-fb59-4bf4-b86e-44c71c2fb42f', 'scsi-SQEMU_QEMU_HARDDISK_29dd0003-fb59-4bf4-b86e-44c71c2fb42f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.550974 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--f7bb41d6--3a07--594b--a5ed--81bcd3eca58b-osd--block--f7bb41d6--3a07--594b--a5ed--81bcd3eca58b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CdcaIX-bgqq-oQhe-4pd6-ASKm-1LJD-QCGPEZ', 'scsi-0QEMU_QEMU_HARDDISK_1db7e25a-216c-47db-963d-7c5f28c46cb6', 'scsi-SQEMU_QEMU_HARDDISK_1db7e25a-216c-47db-963d-7c5f28c46cb6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.550985 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4210a630--ee04--544c--a974--f1a5ed1af07b-osd--block--4210a630--ee04--544c--a974--f1a5ed1af07b', 'dm-uuid-LVM-RgfZvjfqrEZYQe7xidzbnkMdq2kk7SfUzDVMMUqdThVgyrunAKGu0TVv8LGAXis6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.551001 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10337e6d-8da5-41a0-908c-89b7c0023a77', 'scsi-SQEMU_QEMU_HARDDISK_10337e6d-8da5-41a0-908c-89b7c0023a77'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.551018 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f811dc1d--ef4c--5806--9269--19449158751d-osd--block--f811dc1d--ef4c--5806--9269--19449158751d', 'dm-uuid-LVM-4QPi7SlwXSqadW2vH7XkyFokqG8HVkr7YqDSWgMIcVKOvznkX8XTXJglZn1wCQPL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.551028 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': 
['2025-09-04-00-02-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.551042 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.551052 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:58:26.551078 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.551089 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.551105 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.551121 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.551131 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.551141 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.551155 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.551172 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb', 'scsi-SQEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb-part1', 'scsi-SQEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb-part14', 'scsi-SQEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb-part15', 'scsi-SQEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb-part16', 'scsi-SQEMU_QEMU_HARDDISK_d1acc59b-6cf7-43a3-afb9-a02c934829cb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.551195 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--4210a630--ee04--544c--a974--f1a5ed1af07b-osd--block--4210a630--ee04--544c--a974--f1a5ed1af07b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CLO88k-jG6Z-1XWr-vKrj-ZYf3-iQib-Q9fjpq', 'scsi-0QEMU_QEMU_HARDDISK_d9ebab6f-eb3e-4afd-8760-ef197f6a89e1', 'scsi-SQEMU_QEMU_HARDDISK_d9ebab6f-eb3e-4afd-8760-ef197f6a89e1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.551210 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--f811dc1d--ef4c--5806--9269--19449158751d-osd--block--f811dc1d--ef4c--5806--9269--19449158751d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-mdT39u-qveb-3l9j-3x7I-kS0Q-PhUc-MrZPOh', 'scsi-0QEMU_QEMU_HARDDISK_bcfba7c6-2fb5-4d57-9e19-86dc2ee83b3a', 'scsi-SQEMU_QEMU_HARDDISK_bcfba7c6-2fb5-4d57-9e19-86dc2ee83b3a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.551220 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a760aba3-b640-428b-b8ab-7cd9fc53fd0e', 'scsi-SQEMU_QEMU_HARDDISK_a760aba3-b640-428b-b8ab-7cd9fc53fd0e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.551235 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-04-00-02-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-04 00:58:26.551251 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:58:26.551261 | orchestrator | 2025-09-04 00:58:26.551270 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-09-04 00:58:26.551280 | orchestrator | Thursday 04 September 2025 00:56:31 +0000 (0:00:00.607) 0:00:16.861 **** 2025-09-04 00:58:26.551290 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:58:26.551299 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:58:26.551309 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:58:26.551318 | orchestrator | 2025-09-04 00:58:26.551328 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-09-04 00:58:26.551337 | orchestrator | Thursday 04 September 2025 00:56:32 +0000 (0:00:00.679) 0:00:17.540 **** 2025-09-04 00:58:26.551346 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:58:26.551356 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:58:26.551365 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:58:26.551374 | orchestrator | 2025-09-04 00:58:26.551384 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-04 00:58:26.551393 | orchestrator | Thursday 04 September 2025 00:56:32 +0000 (0:00:00.466) 0:00:18.007 **** 2025-09-04 00:58:26.551403 | 
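Note: the long run of skipped items above is the osd_auto_discovery path of ceph-facts. The testbed declares its OSD devices explicitly, so the condition 'osd_auto_discovery | default(False) | bool' is false and every entry of ansible_facts.devices (the loop devices, sda..sdd, sr0) is skipped on testbed-node-3/4/5. A minimal sketch of such a conditional device-list task follows; the partition/holder filters are illustrative assumptions, not the exact ceph-ansible implementation:

  - name: Generate devices list when osd_auto_discovery is enabled (sketch)
    ansible.builtin.set_fact:
      devices: "{{ devices | default([]) + ['/dev/' + item.key] }}"
    with_dict: "{{ ansible_facts['devices'] }}"
    when:
      - osd_auto_discovery | default(False) | bool
      - item.value.partitions | length == 0   # skip disks that already carry partitions (e.g. sda)
      - item.value.holders | length == 0      # skip disks already claimed, e.g. by a Ceph LVM volume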
orchestrator | ok: [testbed-node-3] 2025-09-04 00:58:26.551412 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:58:26.551421 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:58:26.551430 | orchestrator | 2025-09-04 00:58:26.551440 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-04 00:58:26.551449 | orchestrator | Thursday 04 September 2025 00:56:33 +0000 (0:00:00.686) 0:00:18.693 **** 2025-09-04 00:58:26.551459 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:58:26.551468 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:58:26.551478 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:58:26.551487 | orchestrator | 2025-09-04 00:58:26.551497 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-04 00:58:26.551506 | orchestrator | Thursday 04 September 2025 00:56:33 +0000 (0:00:00.281) 0:00:18.974 **** 2025-09-04 00:58:26.551515 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:58:26.551525 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:58:26.551534 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:58:26.551543 | orchestrator | 2025-09-04 00:58:26.551552 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-04 00:58:26.551562 | orchestrator | Thursday 04 September 2025 00:56:34 +0000 (0:00:00.378) 0:00:19.353 **** 2025-09-04 00:58:26.551571 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:58:26.551581 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:58:26.551590 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:58:26.551600 | orchestrator | 2025-09-04 00:58:26.551609 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-09-04 00:58:26.551618 | orchestrator | Thursday 04 September 2025 00:56:34 +0000 (0:00:00.520) 0:00:19.873 **** 2025-09-04 00:58:26.551628 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-09-04 00:58:26.551638 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-09-04 00:58:26.551651 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-09-04 00:58:26.551660 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-09-04 00:58:26.551670 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-09-04 00:58:26.551679 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-09-04 00:58:26.551694 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-09-04 00:58:26.551703 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-09-04 00:58:26.551712 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-09-04 00:58:26.551722 | orchestrator | 2025-09-04 00:58:26.551731 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-09-04 00:58:26.551741 | orchestrator | Thursday 04 September 2025 00:56:35 +0000 (0:00:00.878) 0:00:20.752 **** 2025-09-04 00:58:26.551750 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-04 00:58:26.551760 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-04 00:58:26.551769 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-04 00:58:26.551778 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:58:26.551788 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-04 00:58:26.551797 | orchestrator | 
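Note: the monitor-address tasks above build a _monitor_addresses fact on each OSD node (testbed-node-3/4/5) from the monitors testbed-node-0/1/2; the IPv6 variant is skipped because this deployment is IPv4-only. A sketch of the pattern under assumed variable names (the real role derives the address from its configured monitor address / public network settings rather than the default interface):

  - name: Set_fact _monitor_addresses - ipv4 (sketch)
    ansible.builtin.set_fact:
      _monitor_addresses: "{{ _monitor_addresses | default([]) + [{'name': item, 'addr': hostvars[item]['ansible_default_ipv4']['address']}] }}"
    loop: "{{ groups[mon_group_name | default('mons')] }}"
    when: ip_version == 'ipv4'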
skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-04 00:58:26.551807 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-04 00:58:26.551816 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:58:26.551825 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-04 00:58:26.551834 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-04 00:58:26.551844 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-04 00:58:26.551853 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:58:26.551863 | orchestrator | 2025-09-04 00:58:26.551872 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-09-04 00:58:26.551882 | orchestrator | Thursday 04 September 2025 00:56:35 +0000 (0:00:00.326) 0:00:21.079 **** 2025-09-04 00:58:26.551892 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 00:58:26.551901 | orchestrator | 2025-09-04 00:58:26.551911 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-04 00:58:26.551921 | orchestrator | Thursday 04 September 2025 00:56:36 +0000 (0:00:00.696) 0:00:21.776 **** 2025-09-04 00:58:26.551931 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:58:26.551940 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:58:26.551949 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:58:26.551959 | orchestrator | 2025-09-04 00:58:26.551973 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-09-04 00:58:26.551983 | orchestrator | Thursday 04 September 2025 00:56:36 +0000 (0:00:00.329) 0:00:22.106 **** 2025-09-04 00:58:26.551993 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:58:26.552002 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:58:26.552012 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:58:26.552021 | orchestrator | 2025-09-04 00:58:26.552031 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-09-04 00:58:26.552040 | orchestrator | Thursday 04 September 2025 00:56:37 +0000 (0:00:00.294) 0:00:22.400 **** 2025-09-04 00:58:26.552050 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:58:26.552059 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:58:26.552083 | orchestrator | skipping: [testbed-node-5] 2025-09-04 00:58:26.552093 | orchestrator | 2025-09-04 00:58:26.552103 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-04 00:58:26.552112 | orchestrator | Thursday 04 September 2025 00:56:37 +0000 (0:00:00.301) 0:00:22.701 **** 2025-09-04 00:58:26.552121 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:58:26.552131 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:58:26.552140 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:58:26.552150 | orchestrator | 2025-09-04 00:58:26.552159 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-04 00:58:26.552169 | orchestrator | Thursday 04 September 2025 00:56:38 +0000 (0:00:00.562) 0:00:23.264 **** 2025-09-04 00:58:26.552178 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-04 00:58:26.552193 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-04 
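Note: the tasks around this point resolve _radosgw_address from the configured radosgw_address and then reset and rebuild rgw_instances; with a single RGW instance per host the loop runs once with item=0, as the Set_fact rgw_instances results further below show. A sketch of what such a fact looks like (the instance naming, port arithmetic and the radosgw_num_instances variable are assumptions, not the exact role code):

  - name: Set_fact rgw_instances (sketch)
    ansible.builtin.set_fact:
      rgw_instances: "{{ rgw_instances | default([]) + [{'instance_name': 'rgw' ~ item, 'radosgw_address': _radosgw_address, 'radosgw_frontend_port': radosgw_frontend_port | default(8080) | int + item}] }}"
    loop: "{{ range(0, radosgw_num_instances | default(1)) | list }}"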
00:58:26.552203 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-04 00:58:26.552212 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:58:26.552222 | orchestrator | 2025-09-04 00:58:26.552232 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-04 00:58:26.552241 | orchestrator | Thursday 04 September 2025 00:56:38 +0000 (0:00:00.371) 0:00:23.636 **** 2025-09-04 00:58:26.552251 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-04 00:58:26.552260 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-04 00:58:26.552269 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-04 00:58:26.552279 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:58:26.552288 | orchestrator | 2025-09-04 00:58:26.552298 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-04 00:58:26.552307 | orchestrator | Thursday 04 September 2025 00:56:38 +0000 (0:00:00.353) 0:00:23.989 **** 2025-09-04 00:58:26.552317 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-04 00:58:26.552326 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-04 00:58:26.552336 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-04 00:58:26.552345 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:58:26.552354 | orchestrator | 2025-09-04 00:58:26.552364 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-04 00:58:26.552373 | orchestrator | Thursday 04 September 2025 00:56:39 +0000 (0:00:00.397) 0:00:24.387 **** 2025-09-04 00:58:26.552383 | orchestrator | ok: [testbed-node-3] 2025-09-04 00:58:26.552392 | orchestrator | ok: [testbed-node-4] 2025-09-04 00:58:26.552406 | orchestrator | ok: [testbed-node-5] 2025-09-04 00:58:26.552416 | orchestrator | 2025-09-04 00:58:26.552425 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-04 00:58:26.552435 | orchestrator | Thursday 04 September 2025 00:56:39 +0000 (0:00:00.320) 0:00:24.708 **** 2025-09-04 00:58:26.552444 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-04 00:58:26.552454 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-04 00:58:26.552463 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-04 00:58:26.552472 | orchestrator | 2025-09-04 00:58:26.552482 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-09-04 00:58:26.552491 | orchestrator | Thursday 04 September 2025 00:56:40 +0000 (0:00:00.520) 0:00:25.228 **** 2025-09-04 00:58:26.552501 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-04 00:58:26.552510 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-04 00:58:26.552520 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-04 00:58:26.552529 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-04 00:58:26.552539 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-04 00:58:26.552548 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-04 00:58:26.552557 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => 
(item=testbed-manager) 2025-09-04 00:58:26.552567 | orchestrator | 2025-09-04 00:58:26.552576 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-09-04 00:58:26.552586 | orchestrator | Thursday 04 September 2025 00:56:41 +0000 (0:00:01.009) 0:00:26.238 **** 2025-09-04 00:58:26.552595 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-04 00:58:26.552605 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-04 00:58:26.552615 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-04 00:58:26.552624 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-04 00:58:26.552642 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-04 00:58:26.552652 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-04 00:58:26.552662 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-04 00:58:26.552671 | orchestrator | 2025-09-04 00:58:26.552685 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-09-04 00:58:26.552695 | orchestrator | Thursday 04 September 2025 00:56:42 +0000 (0:00:01.963) 0:00:28.202 **** 2025-09-04 00:58:26.552704 | orchestrator | skipping: [testbed-node-3] 2025-09-04 00:58:26.552714 | orchestrator | skipping: [testbed-node-4] 2025-09-04 00:58:26.552723 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-09-04 00:58:26.552733 | orchestrator | 2025-09-04 00:58:26.552742 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-09-04 00:58:26.552752 | orchestrator | Thursday 04 September 2025 00:56:43 +0000 (0:00:00.383) 0:00:28.586 **** 2025-09-04 00:58:26.552762 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-04 00:58:26.552772 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-04 00:58:26.552783 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-04 00:58:26.552793 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-04 00:58:26.552802 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 
'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-04 00:58:26.552812 | orchestrator | 2025-09-04 00:58:26.552822 | orchestrator | TASK [generate keys] *********************************************************** 2025-09-04 00:58:26.552831 | orchestrator | Thursday 04 September 2025 00:57:30 +0000 (0:00:46.777) 0:01:15.363 **** 2025-09-04 00:58:26.552841 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-04 00:58:26.552854 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-04 00:58:26.552864 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-04 00:58:26.552873 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-04 00:58:26.552883 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-04 00:58:26.552892 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-04 00:58:26.552902 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-09-04 00:58:26.552911 | orchestrator | 2025-09-04 00:58:26.552920 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-09-04 00:58:26.552930 | orchestrator | Thursday 04 September 2025 00:57:54 +0000 (0:00:23.963) 0:01:39.326 **** 2025-09-04 00:58:26.552939 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-04 00:58:26.552955 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-04 00:58:26.552965 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-04 00:58:26.552974 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-04 00:58:26.552983 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-04 00:58:26.552993 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-04 00:58:26.553002 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-04 00:58:26.553012 | orchestrator | 2025-09-04 00:58:26.553021 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-09-04 00:58:26.553030 | orchestrator | Thursday 04 September 2025 00:58:06 +0000 (0:00:12.403) 0:01:51.730 **** 2025-09-04 00:58:26.553040 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-04 00:58:26.553049 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-04 00:58:26.553058 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-04 00:58:26.553083 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-04 00:58:26.553092 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-04 00:58:26.553102 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-04 00:58:26.553116 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-04 00:58:26.553126 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-04 00:58:26.553135 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-2(192.168.16.12)] => (item=None) 2025-09-04 00:58:26.553145 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-04 00:58:26.553154 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-04 00:58:26.553164 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-04 00:58:26.553173 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-04 00:58:26.553183 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-04 00:58:26.553192 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-04 00:58:26.553201 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-04 00:58:26.553210 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-04 00:58:26.553220 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-04 00:58:26.553229 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-09-04 00:58:26.553239 | orchestrator | 2025-09-04 00:58:26.553248 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:58:26.553257 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-09-04 00:58:26.553268 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-04 00:58:26.553278 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-09-04 00:58:26.553287 | orchestrator | 2025-09-04 00:58:26.553297 | orchestrator | 2025-09-04 00:58:26.553306 | orchestrator | 2025-09-04 00:58:26.553315 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 00:58:26.553325 | orchestrator | Thursday 04 September 2025 00:58:24 +0000 (0:00:18.312) 0:02:10.042 **** 2025-09-04 00:58:26.553340 | orchestrator | =============================================================================== 2025-09-04 00:58:26.553349 | orchestrator | create openstack pool(s) ----------------------------------------------- 46.78s 2025-09-04 00:58:26.553359 | orchestrator | generate keys ---------------------------------------------------------- 23.96s 2025-09-04 00:58:26.553368 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.31s 2025-09-04 00:58:26.553377 | orchestrator | get keys from monitors ------------------------------------------------- 12.40s 2025-09-04 00:58:26.553392 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.03s 2025-09-04 00:58:26.553401 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.96s 2025-09-04 00:58:26.553411 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.52s 2025-09-04 00:58:26.553420 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.01s 2025-09-04 00:58:26.553430 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.88s 2025-09-04 00:58:26.553439 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.77s 2025-09-04 00:58:26.553449 | orchestrator | 
ceph-facts : Check if podman binary is present -------------------------- 0.77s 2025-09-04 00:58:26.553458 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.70s 2025-09-04 00:58:26.553468 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.69s 2025-09-04 00:58:26.553477 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.68s 2025-09-04 00:58:26.553486 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.68s 2025-09-04 00:58:26.553496 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.67s 2025-09-04 00:58:26.553505 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.65s 2025-09-04 00:58:26.553514 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.62s 2025-09-04 00:58:26.553524 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.61s 2025-09-04 00:58:26.553533 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.56s 2025-09-04 00:58:26.553542 | orchestrator | 2025-09-04 00:58:26 | INFO  | Task 6953149f-bd05-4614-82da-41768948ac2a is in state STARTED 2025-09-04 00:58:26.553552 | orchestrator | 2025-09-04 00:58:26 | INFO  | Task 0a156da1-3458-4301-9ba4-212f7042484c is in state STARTED 2025-09-04 00:58:26.553561 | orchestrator | 2025-09-04 00:58:26 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:58:29.618182 | orchestrator | 2025-09-04 00:58:29 | INFO  | Task 9705aa39-3916-4dab-af29-606ad5dea387 is in state STARTED 2025-09-04 00:58:29.620466 | orchestrator | 2025-09-04 00:58:29 | INFO  | Task 6953149f-bd05-4614-82da-41768948ac2a is in state STARTED 2025-09-04 00:58:29.622120 | orchestrator | 2025-09-04 00:58:29 | INFO  | Task 0a156da1-3458-4301-9ba4-212f7042484c is in state STARTED 2025-09-04 00:58:29.622660 | orchestrator | 2025-09-04 00:58:29 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:58:32.673353 | orchestrator | 2025-09-04 00:58:32 | INFO  | Task 9705aa39-3916-4dab-af29-606ad5dea387 is in state STARTED 2025-09-04 00:58:32.675060 | orchestrator | 2025-09-04 00:58:32 | INFO  | Task 6953149f-bd05-4614-82da-41768948ac2a is in state STARTED 2025-09-04 00:58:32.676846 | orchestrator | 2025-09-04 00:58:32 | INFO  | Task 0a156da1-3458-4301-9ba4-212f7042484c is in state STARTED 2025-09-04 00:58:32.676887 | orchestrator | 2025-09-04 00:58:32 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:58:35.723784 | orchestrator | 2025-09-04 00:58:35 | INFO  | Task 9705aa39-3916-4dab-af29-606ad5dea387 is in state STARTED 2025-09-04 00:58:35.725476 | orchestrator | 2025-09-04 00:58:35 | INFO  | Task 6953149f-bd05-4614-82da-41768948ac2a is in state STARTED 2025-09-04 00:58:35.726940 | orchestrator | 2025-09-04 00:58:35 | INFO  | Task 0a156da1-3458-4301-9ba4-212f7042484c is in state STARTED 2025-09-04 00:58:35.727253 | orchestrator | 2025-09-04 00:58:35 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:58:38.768511 | orchestrator | 2025-09-04 00:58:38 | INFO  | Task 9705aa39-3916-4dab-af29-606ad5dea387 is in state STARTED 2025-09-04 00:58:38.769725 | orchestrator | 2025-09-04 00:58:38 | INFO  | Task 6953149f-bd05-4614-82da-41768948ac2a is in state STARTED 2025-09-04 00:58:38.773421 | orchestrator | 2025-09-04 00:58:38 | INFO  | Task 0a156da1-3458-4301-9ba4-212f7042484c is in state STARTED 
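The repeated "Task <uuid> is in state STARTED ... Wait 1 second(s) until the next check" records above reflect a simple state-polling loop: each deployment task ID is re-checked on a fixed interval until it reaches a terminal state. The following is a rough, hypothetical Python sketch of that pattern only, not the OSISM implementation; get_task_state() is an assumed helper standing in for whatever task/result backend actually reports the state, and the wait message mirrors the log output above.

import time

def wait_for_tasks(task_ids, get_task_state, interval=1):
    """Poll each task ID until it reaches a terminal state (SUCCESS/FAILURE)."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)          # assumed helper, e.g. a result-backend lookup
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)             # stop polling tasks that have finished
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)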
2025-09-04 00:58:38.773455 | orchestrator | 2025-09-04 00:58:38 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:58:41.833198 | orchestrator | 2025-09-04 00:58:41 | INFO  | Task 9705aa39-3916-4dab-af29-606ad5dea387 is in state STARTED 2025-09-04 00:58:41.834349 | orchestrator | 2025-09-04 00:58:41 | INFO  | Task 6953149f-bd05-4614-82da-41768948ac2a is in state STARTED 2025-09-04 00:58:41.835965 | orchestrator | 2025-09-04 00:58:41 | INFO  | Task 0a156da1-3458-4301-9ba4-212f7042484c is in state STARTED 2025-09-04 00:58:41.836603 | orchestrator | 2025-09-04 00:58:41 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:58:44.888683 | orchestrator | 2025-09-04 00:58:44 | INFO  | Task 9705aa39-3916-4dab-af29-606ad5dea387 is in state STARTED 2025-09-04 00:58:44.891482 | orchestrator | 2025-09-04 00:58:44 | INFO  | Task 6953149f-bd05-4614-82da-41768948ac2a is in state STARTED 2025-09-04 00:58:44.892250 | orchestrator | 2025-09-04 00:58:44 | INFO  | Task 0a156da1-3458-4301-9ba4-212f7042484c is in state STARTED 2025-09-04 00:58:44.892271 | orchestrator | 2025-09-04 00:58:44 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:58:47.946099 | orchestrator | 2025-09-04 00:58:47 | INFO  | Task 9705aa39-3916-4dab-af29-606ad5dea387 is in state STARTED 2025-09-04 00:58:47.949457 | orchestrator | 2025-09-04 00:58:47 | INFO  | Task 6953149f-bd05-4614-82da-41768948ac2a is in state STARTED 2025-09-04 00:58:47.951892 | orchestrator | 2025-09-04 00:58:47 | INFO  | Task 0a156da1-3458-4301-9ba4-212f7042484c is in state STARTED 2025-09-04 00:58:47.951927 | orchestrator | 2025-09-04 00:58:47 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:58:51.008566 | orchestrator | 2025-09-04 00:58:51 | INFO  | Task 9705aa39-3916-4dab-af29-606ad5dea387 is in state STARTED 2025-09-04 00:58:51.010650 | orchestrator | 2025-09-04 00:58:51 | INFO  | Task 6953149f-bd05-4614-82da-41768948ac2a is in state STARTED 2025-09-04 00:58:51.013552 | orchestrator | 2025-09-04 00:58:51 | INFO  | Task 0a156da1-3458-4301-9ba4-212f7042484c is in state STARTED 2025-09-04 00:58:51.014142 | orchestrator | 2025-09-04 00:58:51 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:58:54.061862 | orchestrator | 2025-09-04 00:58:54 | INFO  | Task 9705aa39-3916-4dab-af29-606ad5dea387 is in state STARTED 2025-09-04 00:58:54.063378 | orchestrator | 2025-09-04 00:58:54 | INFO  | Task 6953149f-bd05-4614-82da-41768948ac2a is in state STARTED 2025-09-04 00:58:54.065861 | orchestrator | 2025-09-04 00:58:54 | INFO  | Task 0a156da1-3458-4301-9ba4-212f7042484c is in state STARTED 2025-09-04 00:58:54.065885 | orchestrator | 2025-09-04 00:58:54 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:58:57.124736 | orchestrator | 2025-09-04 00:58:57 | INFO  | Task 9705aa39-3916-4dab-af29-606ad5dea387 is in state SUCCESS 2025-09-04 00:58:57.126951 | orchestrator | 2025-09-04 00:58:57 | INFO  | Task 6953149f-bd05-4614-82da-41768948ac2a is in state STARTED 2025-09-04 00:58:57.130284 | orchestrator | 2025-09-04 00:58:57 | INFO  | Task 6726ac33-a00a-442f-baec-2341940a9321 is in state STARTED 2025-09-04 00:58:57.132275 | orchestrator | 2025-09-04 00:58:57 | INFO  | Task 0a156da1-3458-4301-9ba4-212f7042484c is in state STARTED 2025-09-04 00:58:57.132530 | orchestrator | 2025-09-04 00:58:57 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:59:00.191885 | orchestrator | 2025-09-04 00:59:00 | INFO  | Task 6953149f-bd05-4614-82da-41768948ac2a is in state STARTED 2025-09-04 00:59:00.193462 | orchestrator | 
2025-09-04 00:59:00 | INFO  | Task 6726ac33-a00a-442f-baec-2341940a9321 is in state STARTED 2025-09-04 00:59:00.195327 | orchestrator | 2025-09-04 00:59:00 | INFO  | Task 0a156da1-3458-4301-9ba4-212f7042484c is in state STARTED 2025-09-04 00:59:00.196121 | orchestrator | 2025-09-04 00:59:00 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:59:03.244834 | orchestrator | 2025-09-04 00:59:03 | INFO  | Task 6953149f-bd05-4614-82da-41768948ac2a is in state STARTED 2025-09-04 00:59:03.246421 | orchestrator | 2025-09-04 00:59:03 | INFO  | Task 6726ac33-a00a-442f-baec-2341940a9321 is in state STARTED 2025-09-04 00:59:03.248564 | orchestrator | 2025-09-04 00:59:03 | INFO  | Task 0a156da1-3458-4301-9ba4-212f7042484c is in state STARTED 2025-09-04 00:59:03.248594 | orchestrator | 2025-09-04 00:59:03 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:59:06.293532 | orchestrator | 2025-09-04 00:59:06 | INFO  | Task 6953149f-bd05-4614-82da-41768948ac2a is in state STARTED 2025-09-04 00:59:06.294533 | orchestrator | 2025-09-04 00:59:06 | INFO  | Task 6726ac33-a00a-442f-baec-2341940a9321 is in state STARTED 2025-09-04 00:59:06.296014 | orchestrator | 2025-09-04 00:59:06 | INFO  | Task 0a156da1-3458-4301-9ba4-212f7042484c is in state STARTED 2025-09-04 00:59:06.296037 | orchestrator | 2025-09-04 00:59:06 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:59:09.335186 | orchestrator | 2025-09-04 00:59:09 | INFO  | Task 6953149f-bd05-4614-82da-41768948ac2a is in state STARTED 2025-09-04 00:59:09.335617 | orchestrator | 2025-09-04 00:59:09 | INFO  | Task 6726ac33-a00a-442f-baec-2341940a9321 is in state STARTED 2025-09-04 00:59:09.336474 | orchestrator | 2025-09-04 00:59:09 | INFO  | Task 0a156da1-3458-4301-9ba4-212f7042484c is in state STARTED 2025-09-04 00:59:09.336499 | orchestrator | 2025-09-04 00:59:09 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:59:12.386694 | orchestrator | 2025-09-04 00:59:12 | INFO  | Task 6953149f-bd05-4614-82da-41768948ac2a is in state STARTED 2025-09-04 00:59:12.387736 | orchestrator | 2025-09-04 00:59:12 | INFO  | Task 6726ac33-a00a-442f-baec-2341940a9321 is in state STARTED 2025-09-04 00:59:12.391350 | orchestrator | 2025-09-04 00:59:12 | INFO  | Task 0a156da1-3458-4301-9ba4-212f7042484c is in state SUCCESS 2025-09-04 00:59:12.391708 | orchestrator | 2025-09-04 00:59:12 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:59:12.393970 | orchestrator | 2025-09-04 00:59:12.394004 | orchestrator | 2025-09-04 00:59:12.394087 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-09-04 00:59:12.394103 | orchestrator | 2025-09-04 00:59:12.394114 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-09-04 00:59:12.394126 | orchestrator | Thursday 04 September 2025 00:58:29 +0000 (0:00:00.167) 0:00:00.167 **** 2025-09-04 00:59:12.394192 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-09-04 00:59:12.394209 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-04 00:59:12.394220 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-04 00:59:12.394258 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-09-04 00:59:12.394535 | orchestrator | ok: [testbed-manager -> 
testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-04 00:59:12.394552 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-09-04 00:59:12.394563 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-09-04 00:59:12.394574 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-09-04 00:59:12.394585 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-09-04 00:59:12.394596 | orchestrator | 2025-09-04 00:59:12.394606 | orchestrator | TASK [Create share directory] ************************************************** 2025-09-04 00:59:12.394617 | orchestrator | Thursday 04 September 2025 00:58:33 +0000 (0:00:04.136) 0:00:04.304 **** 2025-09-04 00:59:12.394629 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-04 00:59:12.394640 | orchestrator | 2025-09-04 00:59:12.394650 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-09-04 00:59:12.394661 | orchestrator | Thursday 04 September 2025 00:58:34 +0000 (0:00:00.960) 0:00:05.264 **** 2025-09-04 00:59:12.394672 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-09-04 00:59:12.394683 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-04 00:59:12.394693 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-04 00:59:12.394704 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-09-04 00:59:12.394715 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-04 00:59:12.394725 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-09-04 00:59:12.394736 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-09-04 00:59:12.394746 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-09-04 00:59:12.394757 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-09-04 00:59:12.394768 | orchestrator | 2025-09-04 00:59:12.394778 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-09-04 00:59:12.394789 | orchestrator | Thursday 04 September 2025 00:58:47 +0000 (0:00:13.385) 0:00:18.650 **** 2025-09-04 00:59:12.394800 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-09-04 00:59:12.394811 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-04 00:59:12.394821 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-04 00:59:12.394832 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-09-04 00:59:12.394842 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-04 00:59:12.394853 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-09-04 00:59:12.394864 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-09-04 00:59:12.394874 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-09-04 00:59:12.394885 | orchestrator | 
changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-09-04 00:59:12.394895 | orchestrator | 2025-09-04 00:59:12.394906 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:59:12.394917 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 00:59:12.394928 | orchestrator | 2025-09-04 00:59:12.394939 | orchestrator | 2025-09-04 00:59:12.394950 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 00:59:12.394971 | orchestrator | Thursday 04 September 2025 00:58:54 +0000 (0:00:06.680) 0:00:25.330 **** 2025-09-04 00:59:12.394982 | orchestrator | =============================================================================== 2025-09-04 00:59:12.394992 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.39s 2025-09-04 00:59:12.395012 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.68s 2025-09-04 00:59:12.395023 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.14s 2025-09-04 00:59:12.395033 | orchestrator | Create share directory -------------------------------------------------- 0.96s 2025-09-04 00:59:12.395044 | orchestrator | 2025-09-04 00:59:12.395055 | orchestrator | 2025-09-04 00:59:12.395088 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-04 00:59:12.395099 | orchestrator | 2025-09-04 00:59:12.395121 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-04 00:59:12.395132 | orchestrator | Thursday 04 September 2025 00:57:22 +0000 (0:00:00.263) 0:00:00.263 **** 2025-09-04 00:59:12.395143 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:59:12.395156 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:59:12.395168 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:59:12.395181 | orchestrator | 2025-09-04 00:59:12.395194 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-04 00:59:12.395206 | orchestrator | Thursday 04 September 2025 00:57:23 +0000 (0:00:00.324) 0:00:00.588 **** 2025-09-04 00:59:12.395219 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-09-04 00:59:12.395232 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-09-04 00:59:12.395245 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-09-04 00:59:12.395258 | orchestrator | 2025-09-04 00:59:12.395271 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-09-04 00:59:12.395283 | orchestrator | 2025-09-04 00:59:12.395296 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-04 00:59:12.395308 | orchestrator | Thursday 04 September 2025 00:57:23 +0000 (0:00:00.396) 0:00:00.984 **** 2025-09-04 00:59:12.395321 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:59:12.395333 | orchestrator | 2025-09-04 00:59:12.395345 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-09-04 00:59:12.395358 | orchestrator | Thursday 04 September 2025 00:57:24 +0000 (0:00:00.483) 0:00:01.468 **** 2025-09-04 00:59:12.395379 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-04 00:59:12.395427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back 
if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-04 00:59:12.395445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-04 00:59:12.395466 | orchestrator | 2025-09-04 00:59:12.395478 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-09-04 00:59:12.395491 | orchestrator | Thursday 04 September 2025 00:57:24 +0000 (0:00:00.949) 0:00:02.418 **** 2025-09-04 00:59:12.395504 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:59:12.395517 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:59:12.395530 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:59:12.395542 | orchestrator | 2025-09-04 00:59:12.395557 | orchestrator | TASK [horizon : include_tasks] 
************************************************* 2025-09-04 00:59:12.395568 | orchestrator | Thursday 04 September 2025 00:57:25 +0000 (0:00:00.463) 0:00:02.881 **** 2025-09-04 00:59:12.395579 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-04 00:59:12.395590 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-04 00:59:12.395606 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-09-04 00:59:12.395617 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-09-04 00:59:12.395628 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-09-04 00:59:12.395639 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-09-04 00:59:12.395649 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-09-04 00:59:12.395660 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-09-04 00:59:12.395670 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-04 00:59:12.395681 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-04 00:59:12.395692 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-09-04 00:59:12.395702 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-09-04 00:59:12.395713 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-09-04 00:59:12.395723 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-09-04 00:59:12.395734 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-09-04 00:59:12.395744 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-09-04 00:59:12.395755 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-04 00:59:12.395765 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-04 00:59:12.395776 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-09-04 00:59:12.395786 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-09-04 00:59:12.395797 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-09-04 00:59:12.395807 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-09-04 00:59:12.395824 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-09-04 00:59:12.395835 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-09-04 00:59:12.395846 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-09-04 00:59:12.395858 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-09-04 00:59:12.395869 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-09-04 00:59:12.395880 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-09-04 00:59:12.395891 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-09-04 00:59:12.395901 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-09-04 00:59:12.395912 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-09-04 00:59:12.395923 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-09-04 00:59:12.395933 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-09-04 00:59:12.395944 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-09-04 00:59:12.395955 | orchestrator | 2025-09-04 00:59:12.395966 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-04 00:59:12.395976 | orchestrator | Thursday 04 September 2025 00:57:26 +0000 (0:00:00.736) 0:00:03.618 **** 2025-09-04 00:59:12.395987 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:59:12.395998 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:59:12.396008 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:59:12.396019 | orchestrator | 2025-09-04 00:59:12.396034 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-04 00:59:12.396045 | orchestrator | Thursday 04 September 2025 00:57:26 +0000 (0:00:00.304) 0:00:03.922 **** 2025-09-04 00:59:12.396056 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:59:12.396096 | orchestrator | 2025-09-04 00:59:12.396107 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-04 00:59:12.396123 | orchestrator | Thursday 04 September 2025 00:57:26 +0000 (0:00:00.133) 0:00:04.056 **** 2025-09-04 00:59:12.396135 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:59:12.396145 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:59:12.396156 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:59:12.396166 | orchestrator | 2025-09-04 00:59:12.396177 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-04 00:59:12.396188 | orchestrator | Thursday 04 September 2025 00:57:27 +0000 (0:00:00.439) 0:00:04.496 **** 2025-09-04 00:59:12.396198 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:59:12.396209 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:59:12.396220 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:59:12.396230 | orchestrator | 2025-09-04 00:59:12.396241 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-04 00:59:12.396252 | orchestrator | Thursday 04 September 2025 00:57:27 +0000 (0:00:00.302) 0:00:04.798 
**** 2025-09-04 00:59:12.396269 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:59:12.396280 | orchestrator | 2025-09-04 00:59:12.396290 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-04 00:59:12.396301 | orchestrator | Thursday 04 September 2025 00:57:27 +0000 (0:00:00.139) 0:00:04.937 **** 2025-09-04 00:59:12.396312 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:59:12.396322 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:59:12.396333 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:59:12.396344 | orchestrator | 2025-09-04 00:59:12.396354 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-04 00:59:12.396365 | orchestrator | Thursday 04 September 2025 00:57:27 +0000 (0:00:00.294) 0:00:05.232 **** 2025-09-04 00:59:12.396375 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:59:12.396386 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:59:12.396397 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:59:12.396407 | orchestrator | 2025-09-04 00:59:12.396418 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-04 00:59:12.396428 | orchestrator | Thursday 04 September 2025 00:57:28 +0000 (0:00:00.271) 0:00:05.504 **** 2025-09-04 00:59:12.396439 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:59:12.396449 | orchestrator | 2025-09-04 00:59:12.396460 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-04 00:59:12.396471 | orchestrator | Thursday 04 September 2025 00:57:28 +0000 (0:00:00.124) 0:00:05.629 **** 2025-09-04 00:59:12.396481 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:59:12.396492 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:59:12.396503 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:59:12.396513 | orchestrator | 2025-09-04 00:59:12.396524 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-04 00:59:12.396534 | orchestrator | Thursday 04 September 2025 00:57:28 +0000 (0:00:00.480) 0:00:06.109 **** 2025-09-04 00:59:12.396545 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:59:12.396556 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:59:12.396566 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:59:12.396577 | orchestrator | 2025-09-04 00:59:12.396588 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-04 00:59:12.396598 | orchestrator | Thursday 04 September 2025 00:57:29 +0000 (0:00:00.323) 0:00:06.433 **** 2025-09-04 00:59:12.396609 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:59:12.396619 | orchestrator | 2025-09-04 00:59:12.396630 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-04 00:59:12.396640 | orchestrator | Thursday 04 September 2025 00:57:29 +0000 (0:00:00.143) 0:00:06.577 **** 2025-09-04 00:59:12.396651 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:59:12.396662 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:59:12.396672 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:59:12.396683 | orchestrator | 2025-09-04 00:59:12.396693 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-04 00:59:12.396704 | orchestrator | Thursday 04 September 2025 00:57:29 +0000 (0:00:00.295) 0:00:06.872 **** 2025-09-04 
00:59:12.396714 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:59:12.396725 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:59:12.396735 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:59:12.396746 | orchestrator | 2025-09-04 00:59:12.396756 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-04 00:59:12.396767 | orchestrator | Thursday 04 September 2025 00:57:29 +0000 (0:00:00.304) 0:00:07.177 **** 2025-09-04 00:59:12.396777 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:59:12.396788 | orchestrator | 2025-09-04 00:59:12.396799 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-04 00:59:12.396809 | orchestrator | Thursday 04 September 2025 00:57:30 +0000 (0:00:00.343) 0:00:07.521 **** 2025-09-04 00:59:12.396820 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:59:12.396831 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:59:12.396841 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:59:12.396865 | orchestrator | 2025-09-04 00:59:12.396876 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-04 00:59:12.396887 | orchestrator | Thursday 04 September 2025 00:57:30 +0000 (0:00:00.289) 0:00:07.810 **** 2025-09-04 00:59:12.396898 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:59:12.396908 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:59:12.396919 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:59:12.396930 | orchestrator | 2025-09-04 00:59:12.396940 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-04 00:59:12.396951 | orchestrator | Thursday 04 September 2025 00:57:30 +0000 (0:00:00.313) 0:00:08.124 **** 2025-09-04 00:59:12.396962 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:59:12.396972 | orchestrator | 2025-09-04 00:59:12.396983 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-04 00:59:12.396993 | orchestrator | Thursday 04 September 2025 00:57:30 +0000 (0:00:00.123) 0:00:08.248 **** 2025-09-04 00:59:12.397004 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:59:12.397027 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:59:12.397048 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:59:12.397177 | orchestrator | 2025-09-04 00:59:12.397191 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-04 00:59:12.397202 | orchestrator | Thursday 04 September 2025 00:57:31 +0000 (0:00:00.294) 0:00:08.542 **** 2025-09-04 00:59:12.397212 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:59:12.397223 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:59:12.397234 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:59:12.397244 | orchestrator | 2025-09-04 00:59:12.397262 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-04 00:59:12.397274 | orchestrator | Thursday 04 September 2025 00:57:31 +0000 (0:00:00.466) 0:00:09.009 **** 2025-09-04 00:59:12.397284 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:59:12.397295 | orchestrator | 2025-09-04 00:59:12.397306 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-04 00:59:12.397317 | orchestrator | Thursday 04 September 2025 00:57:31 +0000 (0:00:00.129) 0:00:09.139 **** 2025-09-04 00:59:12.397327 | orchestrator | skipping: 
[testbed-node-0] 2025-09-04 00:59:12.397338 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:59:12.397349 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:59:12.397360 | orchestrator | 2025-09-04 00:59:12.397370 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-04 00:59:12.397381 | orchestrator | Thursday 04 September 2025 00:57:31 +0000 (0:00:00.284) 0:00:09.423 **** 2025-09-04 00:59:12.397392 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:59:12.397402 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:59:12.397413 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:59:12.397424 | orchestrator | 2025-09-04 00:59:12.397434 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-04 00:59:12.397445 | orchestrator | Thursday 04 September 2025 00:57:32 +0000 (0:00:00.321) 0:00:09.744 **** 2025-09-04 00:59:12.397456 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:59:12.397466 | orchestrator | 2025-09-04 00:59:12.397477 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-04 00:59:12.397488 | orchestrator | Thursday 04 September 2025 00:57:32 +0000 (0:00:00.129) 0:00:09.874 **** 2025-09-04 00:59:12.397499 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:59:12.397509 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:59:12.397520 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:59:12.397531 | orchestrator | 2025-09-04 00:59:12.397541 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-04 00:59:12.397552 | orchestrator | Thursday 04 September 2025 00:57:32 +0000 (0:00:00.313) 0:00:10.187 **** 2025-09-04 00:59:12.397563 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:59:12.397573 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:59:12.397584 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:59:12.397595 | orchestrator | 2025-09-04 00:59:12.397614 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-04 00:59:12.397626 | orchestrator | Thursday 04 September 2025 00:57:33 +0000 (0:00:00.475) 0:00:10.663 **** 2025-09-04 00:59:12.397636 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:59:12.397647 | orchestrator | 2025-09-04 00:59:12.397658 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-04 00:59:12.397668 | orchestrator | Thursday 04 September 2025 00:57:33 +0000 (0:00:00.125) 0:00:10.788 **** 2025-09-04 00:59:12.397679 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:59:12.397690 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:59:12.397700 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:59:12.397709 | orchestrator | 2025-09-04 00:59:12.397719 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-04 00:59:12.397728 | orchestrator | Thursday 04 September 2025 00:57:33 +0000 (0:00:00.329) 0:00:11.117 **** 2025-09-04 00:59:12.397738 | orchestrator | ok: [testbed-node-0] 2025-09-04 00:59:12.397747 | orchestrator | ok: [testbed-node-1] 2025-09-04 00:59:12.397757 | orchestrator | ok: [testbed-node-2] 2025-09-04 00:59:12.397766 | orchestrator | 2025-09-04 00:59:12.397776 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-04 00:59:12.397785 | orchestrator | Thursday 04 September 2025 
00:57:34 +0000 (0:00:00.352) 0:00:11.470 **** 2025-09-04 00:59:12.397794 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:59:12.397804 | orchestrator | 2025-09-04 00:59:12.397813 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-04 00:59:12.397823 | orchestrator | Thursday 04 September 2025 00:57:34 +0000 (0:00:00.121) 0:00:11.592 **** 2025-09-04 00:59:12.397832 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:59:12.397842 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:59:12.397851 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:59:12.397861 | orchestrator | 2025-09-04 00:59:12.397870 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-09-04 00:59:12.397880 | orchestrator | Thursday 04 September 2025 00:57:34 +0000 (0:00:00.510) 0:00:12.103 **** 2025-09-04 00:59:12.397889 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:59:12.397899 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:59:12.397908 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:59:12.397918 | orchestrator | 2025-09-04 00:59:12.397927 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-09-04 00:59:12.397937 | orchestrator | Thursday 04 September 2025 00:57:36 +0000 (0:00:01.831) 0:00:13.935 **** 2025-09-04 00:59:12.397946 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-04 00:59:12.397956 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-04 00:59:12.397966 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-04 00:59:12.397975 | orchestrator | 2025-09-04 00:59:12.397985 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-09-04 00:59:12.397994 | orchestrator | Thursday 04 September 2025 00:57:38 +0000 (0:00:02.206) 0:00:16.141 **** 2025-09-04 00:59:12.398004 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-04 00:59:12.398048 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-04 00:59:12.398078 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-04 00:59:12.398088 | orchestrator | 2025-09-04 00:59:12.398098 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-09-04 00:59:12.398107 | orchestrator | Thursday 04 September 2025 00:57:40 +0000 (0:00:02.029) 0:00:18.170 **** 2025-09-04 00:59:12.398123 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-04 00:59:12.398133 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-04 00:59:12.398148 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-04 00:59:12.398158 | orchestrator | 2025-09-04 00:59:12.398167 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-09-04 00:59:12.398177 | orchestrator | Thursday 04 September 2025 00:57:42 +0000 (0:00:01.915) 0:00:20.086 **** 2025-09-04 00:59:12.398186 | orchestrator | skipping: 
[testbed-node-0] 2025-09-04 00:59:12.398196 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:59:12.398205 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:59:12.398215 | orchestrator | 2025-09-04 00:59:12.398224 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-09-04 00:59:12.398234 | orchestrator | Thursday 04 September 2025 00:57:42 +0000 (0:00:00.302) 0:00:20.388 **** 2025-09-04 00:59:12.398243 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:59:12.398252 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:59:12.398261 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:59:12.398271 | orchestrator | 2025-09-04 00:59:12.398280 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-04 00:59:12.398289 | orchestrator | Thursday 04 September 2025 00:57:43 +0000 (0:00:00.316) 0:00:20.704 **** 2025-09-04 00:59:12.398299 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:59:12.398308 | orchestrator | 2025-09-04 00:59:12.398317 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-09-04 00:59:12.398327 | orchestrator | Thursday 04 September 2025 00:57:43 +0000 (0:00:00.575) 0:00:21.279 **** 2025-09-04 00:59:12.398338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}}) 2025-09-04 00:59:12.398363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-04 00:59:12.398381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 
'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-04 00:59:12.398398 | orchestrator | 2025-09-04 00:59:12.398407 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-09-04 00:59:12.398421 | orchestrator | Thursday 04 September 2025 00:57:45 +0000 (0:00:01.831) 0:00:23.111 **** 2025-09-04 00:59:12.398440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-04 00:59:12.398452 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:59:12.398468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 
'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-04 00:59:12.398489 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:59:12.398500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-04 00:59:12.398511 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:59:12.398520 | orchestrator | 2025-09-04 00:59:12.398530 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-09-04 00:59:12.398539 | orchestrator | Thursday 04 September 2025 00:57:46 +0000 (0:00:00.585) 0:00:23.696 **** 2025-09-04 00:59:12.398561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-04 00:59:12.398578 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:59:12.398589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-04 00:59:12.398599 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:59:12.398622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-04 00:59:12.398640 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:59:12.398650 | orchestrator | 2025-09-04 00:59:12.398659 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-09-04 00:59:12.398669 | orchestrator | Thursday 04 September 2025 00:57:47 +0000 (0:00:00.804) 0:00:24.500 **** 2025-09-04 00:59:12.398679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-04 00:59:12.398712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-04 00:59:12.398724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-04 00:59:12.398741 | orchestrator | 2025-09-04 00:59:12.398750 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-04 00:59:12.398760 | orchestrator | Thursday 04 September 2025 00:57:48 +0000 (0:00:01.693) 0:00:26.194 **** 2025-09-04 00:59:12.398769 | orchestrator | skipping: [testbed-node-0] 2025-09-04 00:59:12.398779 | orchestrator | skipping: [testbed-node-1] 2025-09-04 00:59:12.398788 | orchestrator | skipping: [testbed-node-2] 2025-09-04 00:59:12.398797 | orchestrator | 2025-09-04 00:59:12.398807 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-04 00:59:12.398816 | orchestrator | Thursday 04 September 2025 00:57:49 +0000 (0:00:00.315) 0:00:26.509 **** 2025-09-04 00:59:12.398830 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 00:59:12.398840 | orchestrator | 2025-09-04 00:59:12.398849 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-09-04 00:59:12.398858 | orchestrator | Thursday 04 September 2025 00:57:49 +0000 (0:00:00.507) 0:00:27.016 **** 2025-09-04 00:59:12.398868 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:59:12.398877 | orchestrator | 2025-09-04 00:59:12.398891 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-09-04 00:59:12.398901 | orchestrator | Thursday 04 September 2025 00:57:51 +0000 (0:00:02.254) 0:00:29.271 **** 2025-09-04 00:59:12.398910 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:59:12.398920 | orchestrator | 2025-09-04 00:59:12.398929 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-09-04 00:59:12.398938 | orchestrator | Thursday 04 September 2025 00:57:54 +0000 (0:00:02.574) 0:00:31.846 **** 2025-09-04 00:59:12.398948 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:59:12.398957 | orchestrator | 2025-09-04 00:59:12.398967 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-04 00:59:12.398976 | orchestrator | Thursday 04 September 2025 00:58:10 +0000 (0:00:16.111) 0:00:47.957 **** 2025-09-04 00:59:12.398985 | orchestrator | 2025-09-04 00:59:12.398994 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-04 00:59:12.399004 | orchestrator | Thursday 04 September 2025 00:58:10 +0000 (0:00:00.075) 0:00:48.033 **** 2025-09-04 00:59:12.399013 | orchestrator | 2025-09-04 00:59:12.399022 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-04 00:59:12.399032 | orchestrator | Thursday 04 September 2025 00:58:10 +0000 (0:00:00.061) 0:00:48.095 **** 2025-09-04 00:59:12.399041 | orchestrator | 2025-09-04 00:59:12.399050 | orchestrator | RUNNING HANDLER [horizon : Restart horizon 
container] ************************** 2025-09-04 00:59:12.399098 | orchestrator | Thursday 04 September 2025 00:58:10 +0000 (0:00:00.071) 0:00:48.166 **** 2025-09-04 00:59:12.399109 | orchestrator | changed: [testbed-node-0] 2025-09-04 00:59:12.399119 | orchestrator | changed: [testbed-node-1] 2025-09-04 00:59:12.399129 | orchestrator | changed: [testbed-node-2] 2025-09-04 00:59:12.399138 | orchestrator | 2025-09-04 00:59:12.399147 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 00:59:12.399157 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-09-04 00:59:12.399167 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-09-04 00:59:12.399176 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-09-04 00:59:12.399192 | orchestrator | 2025-09-04 00:59:12.399202 | orchestrator | 2025-09-04 00:59:12.399211 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 00:59:12.399221 | orchestrator | Thursday 04 September 2025 00:59:09 +0000 (0:00:58.491) 0:01:46.657 **** 2025-09-04 00:59:12.399230 | orchestrator | =============================================================================== 2025-09-04 00:59:12.399240 | orchestrator | horizon : Restart horizon container ------------------------------------ 58.49s 2025-09-04 00:59:12.399249 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.11s 2025-09-04 00:59:12.399259 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.57s 2025-09-04 00:59:12.399268 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.25s 2025-09-04 00:59:12.399277 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.21s 2025-09-04 00:59:12.399287 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.03s 2025-09-04 00:59:12.399296 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.92s 2025-09-04 00:59:12.399306 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.83s 2025-09-04 00:59:12.399315 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.83s 2025-09-04 00:59:12.399323 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.69s 2025-09-04 00:59:12.399331 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 0.95s 2025-09-04 00:59:12.399339 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.80s 2025-09-04 00:59:12.399346 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.74s 2025-09-04 00:59:12.399354 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.59s 2025-09-04 00:59:12.399362 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.58s 2025-09-04 00:59:12.399369 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.51s 2025-09-04 00:59:12.399377 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.51s 2025-09-04 00:59:12.399384 | orchestrator | 
horizon : include_tasks ------------------------------------------------- 0.48s 2025-09-04 00:59:12.399392 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.48s 2025-09-04 00:59:12.399400 | orchestrator | horizon : Update policy file name --------------------------------------- 0.48s 2025-09-04 00:59:15.446908 | orchestrator | 2025-09-04 00:59:15 | INFO  | Task 6953149f-bd05-4614-82da-41768948ac2a is in state STARTED 2025-09-04 00:59:15.449602 | orchestrator | 2025-09-04 00:59:15 | INFO  | Task 6726ac33-a00a-442f-baec-2341940a9321 is in state STARTED 2025-09-04 00:59:15.450263 | orchestrator | 2025-09-04 00:59:15 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:59:18.497391 | orchestrator | 2025-09-04 00:59:18 | INFO  | Task 6953149f-bd05-4614-82da-41768948ac2a is in state STARTED 2025-09-04 00:59:18.498733 | orchestrator | 2025-09-04 00:59:18 | INFO  | Task 6726ac33-a00a-442f-baec-2341940a9321 is in state STARTED 2025-09-04 00:59:18.498972 | orchestrator | 2025-09-04 00:59:18 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:59:21.548028 | orchestrator | 2025-09-04 00:59:21 | INFO  | Task 6953149f-bd05-4614-82da-41768948ac2a is in state STARTED 2025-09-04 00:59:21.549137 | orchestrator | 2025-09-04 00:59:21 | INFO  | Task 6726ac33-a00a-442f-baec-2341940a9321 is in state STARTED 2025-09-04 00:59:21.549170 | orchestrator | 2025-09-04 00:59:21 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:59:24.590558 | orchestrator | 2025-09-04 00:59:24 | INFO  | Task 6953149f-bd05-4614-82da-41768948ac2a is in state STARTED 2025-09-04 00:59:24.592548 | orchestrator | 2025-09-04 00:59:24 | INFO  | Task 6726ac33-a00a-442f-baec-2341940a9321 is in state STARTED 2025-09-04 00:59:24.592579 | orchestrator | 2025-09-04 00:59:24 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:59:27.638721 | orchestrator | 2025-09-04 00:59:27 | INFO  | Task 6953149f-bd05-4614-82da-41768948ac2a is in state STARTED 2025-09-04 00:59:27.640023 | orchestrator | 2025-09-04 00:59:27 | INFO  | Task 6726ac33-a00a-442f-baec-2341940a9321 is in state STARTED 2025-09-04 00:59:27.640051 | orchestrator | 2025-09-04 00:59:27 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:59:30.688139 | orchestrator | 2025-09-04 00:59:30 | INFO  | Task 6953149f-bd05-4614-82da-41768948ac2a is in state STARTED 2025-09-04 00:59:30.689403 | orchestrator | 2025-09-04 00:59:30 | INFO  | Task 6726ac33-a00a-442f-baec-2341940a9321 is in state STARTED 2025-09-04 00:59:30.689430 | orchestrator | 2025-09-04 00:59:30 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:59:33.728982 | orchestrator | 2025-09-04 00:59:33 | INFO  | Task 6953149f-bd05-4614-82da-41768948ac2a is in state STARTED 2025-09-04 00:59:33.730813 | orchestrator | 2025-09-04 00:59:33 | INFO  | Task 6726ac33-a00a-442f-baec-2341940a9321 is in state STARTED 2025-09-04 00:59:33.730848 | orchestrator | 2025-09-04 00:59:33 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:59:36.774949 | orchestrator | 2025-09-04 00:59:36 | INFO  | Task 6953149f-bd05-4614-82da-41768948ac2a is in state STARTED 2025-09-04 00:59:36.775765 | orchestrator | 2025-09-04 00:59:36 | INFO  | Task 6726ac33-a00a-442f-baec-2341940a9321 is in state STARTED 2025-09-04 00:59:36.775892 | orchestrator | 2025-09-04 00:59:36 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:59:39.821926 | orchestrator | 2025-09-04 00:59:39 | INFO  | Task 6953149f-bd05-4614-82da-41768948ac2a is in state STARTED 2025-09-04 
00:59:39.823997 | orchestrator | 2025-09-04 00:59:39 | INFO  | Task 6726ac33-a00a-442f-baec-2341940a9321 is in state STARTED 2025-09-04 00:59:39.824121 | orchestrator | 2025-09-04 00:59:39 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:59:42.871952 | orchestrator | 2025-09-04 00:59:42 | INFO  | Task 6953149f-bd05-4614-82da-41768948ac2a is in state STARTED 2025-09-04 00:59:42.873784 | orchestrator | 2025-09-04 00:59:42 | INFO  | Task 6726ac33-a00a-442f-baec-2341940a9321 is in state STARTED 2025-09-04 00:59:42.873815 | orchestrator | 2025-09-04 00:59:42 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:59:45.920523 | orchestrator | 2025-09-04 00:59:45 | INFO  | Task 6953149f-bd05-4614-82da-41768948ac2a is in state STARTED 2025-09-04 00:59:45.922228 | orchestrator | 2025-09-04 00:59:45 | INFO  | Task 6726ac33-a00a-442f-baec-2341940a9321 is in state STARTED 2025-09-04 00:59:45.922859 | orchestrator | 2025-09-04 00:59:45 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:59:48.967708 | orchestrator | 2025-09-04 00:59:48 | INFO  | Task 6953149f-bd05-4614-82da-41768948ac2a is in state STARTED 2025-09-04 00:59:48.968381 | orchestrator | 2025-09-04 00:59:48 | INFO  | Task 6726ac33-a00a-442f-baec-2341940a9321 is in state STARTED 2025-09-04 00:59:48.968412 | orchestrator | 2025-09-04 00:59:48 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:59:52.018825 | orchestrator | 2025-09-04 00:59:52 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 00:59:52.019908 | orchestrator | 2025-09-04 00:59:52 | INFO  | Task 99bc116f-7540-406a-8e7b-19b901e21c35 is in state STARTED 2025-09-04 00:59:52.021714 | orchestrator | 2025-09-04 00:59:52 | INFO  | Task 6953149f-bd05-4614-82da-41768948ac2a is in state STARTED 2025-09-04 00:59:52.024154 | orchestrator | 2025-09-04 00:59:52 | INFO  | Task 6726ac33-a00a-442f-baec-2341940a9321 is in state SUCCESS 2025-09-04 00:59:52.025524 | orchestrator | 2025-09-04 00:59:52 | INFO  | Task 07e35ae1-f014-42f1-b222-3a0928784f03 is in state STARTED 2025-09-04 00:59:52.025626 | orchestrator | 2025-09-04 00:59:52 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:59:55.072480 | orchestrator | 2025-09-04 00:59:55 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 00:59:55.073398 | orchestrator | 2025-09-04 00:59:55 | INFO  | Task 99bc116f-7540-406a-8e7b-19b901e21c35 is in state STARTED 2025-09-04 00:59:55.074443 | orchestrator | 2025-09-04 00:59:55 | INFO  | Task 6953149f-bd05-4614-82da-41768948ac2a is in state STARTED 2025-09-04 00:59:55.075547 | orchestrator | 2025-09-04 00:59:55 | INFO  | Task 07e35ae1-f014-42f1-b222-3a0928784f03 is in state STARTED 2025-09-04 00:59:55.075770 | orchestrator | 2025-09-04 00:59:55 | INFO  | Wait 1 second(s) until the next check 2025-09-04 00:59:58.119338 | orchestrator | 2025-09-04 00:59:58 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 00:59:58.119767 | orchestrator | 2025-09-04 00:59:58 | INFO  | Task 99bc116f-7540-406a-8e7b-19b901e21c35 is in state STARTED 2025-09-04 00:59:58.120479 | orchestrator | 2025-09-04 00:59:58 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 00:59:58.121542 | orchestrator | 2025-09-04 00:59:58 | INFO  | Task 6953149f-bd05-4614-82da-41768948ac2a is in state STARTED 2025-09-04 00:59:58.122285 | orchestrator | 2025-09-04 00:59:58 | INFO  | Task 07e35ae1-f014-42f1-b222-3a0928784f03 is in state SUCCESS 2025-09-04 00:59:58.122996 | 
orchestrator | 2025-09-04 00:59:58 | INFO  | Task 0516d205-8d75-4cbc-8763-06b32a8ab0bc is in state STARTED 2025-09-04 00:59:58.123231 | orchestrator | 2025-09-04 00:59:58 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:00:01.265962 | orchestrator | 2025-09-04 01:00:01 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:00:01.266771 | orchestrator | 2025-09-04 01:00:01 | INFO  | Task 99bc116f-7540-406a-8e7b-19b901e21c35 is in state STARTED 2025-09-04 01:00:01.268151 | orchestrator | 2025-09-04 01:00:01 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:00:01.268254 | orchestrator | 2025-09-04 01:00:01 | INFO  | Task 6953149f-bd05-4614-82da-41768948ac2a is in state STARTED 2025-09-04 01:00:01.269601 | orchestrator | 2025-09-04 01:00:01 | INFO  | Task 0516d205-8d75-4cbc-8763-06b32a8ab0bc is in state STARTED 2025-09-04 01:00:01.269625 | orchestrator | 2025-09-04 01:00:01 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:00:04.314610 | orchestrator | 2025-09-04 01:00:04 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:00:04.315231 | orchestrator | 2025-09-04 01:00:04 | INFO  | Task 99bc116f-7540-406a-8e7b-19b901e21c35 is in state STARTED 2025-09-04 01:00:04.315825 | orchestrator | 2025-09-04 01:00:04 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:00:04.317281 | orchestrator | 2025-09-04 01:00:04 | INFO  | Task 6953149f-bd05-4614-82da-41768948ac2a is in state STARTED 2025-09-04 01:00:04.317778 | orchestrator | 2025-09-04 01:00:04 | INFO  | Task 0516d205-8d75-4cbc-8763-06b32a8ab0bc is in state STARTED 2025-09-04 01:00:04.318451 | orchestrator | 2025-09-04 01:00:04 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:00:07.363914 | orchestrator | 2025-09-04 01:00:07 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:00:07.365518 | orchestrator | 2025-09-04 01:00:07 | INFO  | Task 99bc116f-7540-406a-8e7b-19b901e21c35 is in state STARTED 2025-09-04 01:00:07.365779 | orchestrator | 2025-09-04 01:00:07 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:00:07.367971 | orchestrator | 2025-09-04 01:00:07 | INFO  | Task 6953149f-bd05-4614-82da-41768948ac2a is in state SUCCESS 2025-09-04 01:00:07.369913 | orchestrator | 2025-09-04 01:00:07.369944 | orchestrator | 2025-09-04 01:00:07.369956 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-09-04 01:00:07.369967 | orchestrator | 2025-09-04 01:00:07.369978 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-09-04 01:00:07.369989 | orchestrator | Thursday 04 September 2025 00:58:58 +0000 (0:00:00.234) 0:00:00.234 **** 2025-09-04 01:00:07.370000 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-09-04 01:00:07.370012 | orchestrator | 2025-09-04 01:00:07.370096 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-09-04 01:00:07.370109 | orchestrator | Thursday 04 September 2025 00:58:59 +0000 (0:00:00.227) 0:00:00.462 **** 2025-09-04 01:00:07.370120 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-09-04 01:00:07.370131 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-09-04 01:00:07.370143 | 
orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-09-04 01:00:07.370154 | orchestrator | 2025-09-04 01:00:07.370164 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-09-04 01:00:07.370175 | orchestrator | Thursday 04 September 2025 00:59:00 +0000 (0:00:01.276) 0:00:01.738 **** 2025-09-04 01:00:07.370186 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-09-04 01:00:07.370197 | orchestrator | 2025-09-04 01:00:07.370208 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-09-04 01:00:07.370219 | orchestrator | Thursday 04 September 2025 00:59:01 +0000 (0:00:01.145) 0:00:02.884 **** 2025-09-04 01:00:07.370230 | orchestrator | changed: [testbed-manager] 2025-09-04 01:00:07.370241 | orchestrator | 2025-09-04 01:00:07.370252 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-09-04 01:00:07.370263 | orchestrator | Thursday 04 September 2025 00:59:02 +0000 (0:00:00.954) 0:00:03.839 **** 2025-09-04 01:00:07.370274 | orchestrator | changed: [testbed-manager] 2025-09-04 01:00:07.370285 | orchestrator | 2025-09-04 01:00:07.370296 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-09-04 01:00:07.370307 | orchestrator | Thursday 04 September 2025 00:59:03 +0000 (0:00:00.984) 0:00:04.824 **** 2025-09-04 01:00:07.370317 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2025-09-04 01:00:07.370328 | orchestrator | ok: [testbed-manager] 2025-09-04 01:00:07.370339 | orchestrator | 2025-09-04 01:00:07.370350 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-09-04 01:00:07.370360 | orchestrator | Thursday 04 September 2025 00:59:38 +0000 (0:00:35.506) 0:00:40.330 **** 2025-09-04 01:00:07.370371 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-09-04 01:00:07.370383 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-09-04 01:00:07.370393 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-09-04 01:00:07.370404 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-09-04 01:00:07.370415 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-09-04 01:00:07.370426 | orchestrator | 2025-09-04 01:00:07.370437 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-09-04 01:00:07.370447 | orchestrator | Thursday 04 September 2025 00:59:43 +0000 (0:00:04.105) 0:00:44.435 **** 2025-09-04 01:00:07.370458 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-09-04 01:00:07.370469 | orchestrator | 2025-09-04 01:00:07.370480 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-09-04 01:00:07.371032 | orchestrator | Thursday 04 September 2025 00:59:43 +0000 (0:00:00.462) 0:00:44.898 **** 2025-09-04 01:00:07.371070 | orchestrator | skipping: [testbed-manager] 2025-09-04 01:00:07.371082 | orchestrator | 2025-09-04 01:00:07.371093 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-09-04 01:00:07.371103 | orchestrator | Thursday 04 September 2025 00:59:43 +0000 (0:00:00.133) 0:00:45.031 **** 2025-09-04 01:00:07.371114 | orchestrator | skipping: [testbed-manager] 2025-09-04 01:00:07.371125 | 
orchestrator | 2025-09-04 01:00:07.371135 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-09-04 01:00:07.371146 | orchestrator | Thursday 04 September 2025 00:59:43 +0000 (0:00:00.309) 0:00:45.340 **** 2025-09-04 01:00:07.371156 | orchestrator | changed: [testbed-manager] 2025-09-04 01:00:07.371167 | orchestrator | 2025-09-04 01:00:07.371178 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-09-04 01:00:07.371188 | orchestrator | Thursday 04 September 2025 00:59:45 +0000 (0:00:01.997) 0:00:47.338 **** 2025-09-04 01:00:07.371199 | orchestrator | changed: [testbed-manager] 2025-09-04 01:00:07.371210 | orchestrator | 2025-09-04 01:00:07.371220 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-09-04 01:00:07.371231 | orchestrator | Thursday 04 September 2025 00:59:46 +0000 (0:00:00.764) 0:00:48.103 **** 2025-09-04 01:00:07.371242 | orchestrator | changed: [testbed-manager] 2025-09-04 01:00:07.371252 | orchestrator | 2025-09-04 01:00:07.371263 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-09-04 01:00:07.371273 | orchestrator | Thursday 04 September 2025 00:59:47 +0000 (0:00:00.686) 0:00:48.790 **** 2025-09-04 01:00:07.371284 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-09-04 01:00:07.371313 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-09-04 01:00:07.371325 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-09-04 01:00:07.371336 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-09-04 01:00:07.371347 | orchestrator | 2025-09-04 01:00:07.371358 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 01:00:07.371369 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-04 01:00:07.371380 | orchestrator | 2025-09-04 01:00:07.371391 | orchestrator | 2025-09-04 01:00:07.371444 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 01:00:07.371457 | orchestrator | Thursday 04 September 2025 00:59:48 +0000 (0:00:01.507) 0:00:50.297 **** 2025-09-04 01:00:07.371468 | orchestrator | =============================================================================== 2025-09-04 01:00:07.371479 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 35.51s 2025-09-04 01:00:07.371490 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.11s 2025-09-04 01:00:07.371500 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 2.00s 2025-09-04 01:00:07.371518 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.51s 2025-09-04 01:00:07.371529 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.28s 2025-09-04 01:00:07.371540 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.15s 2025-09-04 01:00:07.371551 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.98s 2025-09-04 01:00:07.371561 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.95s 2025-09-04 01:00:07.371572 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.76s 2025-09-04 
01:00:07.371582 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.69s 2025-09-04 01:00:07.371593 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.46s 2025-09-04 01:00:07.371604 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.31s 2025-09-04 01:00:07.371623 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.23s 2025-09-04 01:00:07.371636 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s 2025-09-04 01:00:07.371649 | orchestrator | 2025-09-04 01:00:07.371661 | orchestrator | 2025-09-04 01:00:07.371675 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-04 01:00:07.371687 | orchestrator | 2025-09-04 01:00:07.371699 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-04 01:00:07.371712 | orchestrator | Thursday 04 September 2025 00:59:53 +0000 (0:00:00.181) 0:00:00.181 **** 2025-09-04 01:00:07.371725 | orchestrator | ok: [testbed-node-0] 2025-09-04 01:00:07.371738 | orchestrator | ok: [testbed-node-1] 2025-09-04 01:00:07.371750 | orchestrator | ok: [testbed-node-2] 2025-09-04 01:00:07.371763 | orchestrator | 2025-09-04 01:00:07.371777 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-04 01:00:07.371791 | orchestrator | Thursday 04 September 2025 00:59:53 +0000 (0:00:00.298) 0:00:00.479 **** 2025-09-04 01:00:07.371804 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-09-04 01:00:07.371818 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-09-04 01:00:07.371832 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-09-04 01:00:07.371845 | orchestrator | 2025-09-04 01:00:07.371859 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-09-04 01:00:07.371872 | orchestrator | 2025-09-04 01:00:07.371883 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-09-04 01:00:07.371893 | orchestrator | Thursday 04 September 2025 00:59:54 +0000 (0:00:00.784) 0:00:01.263 **** 2025-09-04 01:00:07.371904 | orchestrator | ok: [testbed-node-2] 2025-09-04 01:00:07.371915 | orchestrator | ok: [testbed-node-1] 2025-09-04 01:00:07.371926 | orchestrator | ok: [testbed-node-0] 2025-09-04 01:00:07.371936 | orchestrator | 2025-09-04 01:00:07.371947 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 01:00:07.371959 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 01:00:07.371970 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 01:00:07.371981 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 01:00:07.371992 | orchestrator | 2025-09-04 01:00:07.372003 | orchestrator | 2025-09-04 01:00:07.372014 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 01:00:07.372024 | orchestrator | Thursday 04 September 2025 00:59:55 +0000 (0:00:00.754) 0:00:02.018 **** 2025-09-04 01:00:07.372035 | orchestrator | =============================================================================== 2025-09-04 
01:00:07.372080 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.78s 2025-09-04 01:00:07.372092 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.75s 2025-09-04 01:00:07.372103 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s 2025-09-04 01:00:07.372113 | orchestrator | 2025-09-04 01:00:07.372124 | orchestrator | 2025-09-04 01:00:07.372135 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-04 01:00:07.372146 | orchestrator | 2025-09-04 01:00:07.372156 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-04 01:00:07.372167 | orchestrator | Thursday 04 September 2025 00:57:22 +0000 (0:00:00.276) 0:00:00.276 **** 2025-09-04 01:00:07.372178 | orchestrator | ok: [testbed-node-0] 2025-09-04 01:00:07.372189 | orchestrator | ok: [testbed-node-1] 2025-09-04 01:00:07.372200 | orchestrator | ok: [testbed-node-2] 2025-09-04 01:00:07.372210 | orchestrator | 2025-09-04 01:00:07.372221 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-04 01:00:07.372232 | orchestrator | Thursday 04 September 2025 00:57:23 +0000 (0:00:00.321) 0:00:00.598 **** 2025-09-04 01:00:07.372249 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-09-04 01:00:07.372260 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-09-04 01:00:07.372272 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-09-04 01:00:07.372283 | orchestrator | 2025-09-04 01:00:07.372293 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-09-04 01:00:07.372304 | orchestrator | 2025-09-04 01:00:07.372349 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-04 01:00:07.372362 | orchestrator | Thursday 04 September 2025 00:57:23 +0000 (0:00:00.427) 0:00:01.025 **** 2025-09-04 01:00:07.372373 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 01:00:07.372384 | orchestrator | 2025-09-04 01:00:07.372394 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-09-04 01:00:07.372405 | orchestrator | Thursday 04 September 2025 00:57:24 +0000 (0:00:00.555) 0:00:01.581 **** 2025-09-04 01:00:07.372442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}}}}) 2025-09-04 01:00:07.372460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-04 01:00:07.372545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-04 01:00:07.372567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-04 01:00:07.372622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-04 01:00:07.372635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-04 01:00:07.372726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-04 01:00:07.372750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-04 01:00:07.372761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-04 01:00:07.372772 | orchestrator | 2025-09-04 01:00:07.372784 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-09-04 01:00:07.372802 | orchestrator | Thursday 04 September 2025 00:57:25 +0000 (0:00:01.723) 0:00:03.304 **** 2025-09-04 01:00:07.372813 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-09-04 01:00:07.372824 | orchestrator | 2025-09-04 01:00:07.372835 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-09-04 01:00:07.372846 | orchestrator | Thursday 04 September 2025 00:57:26 +0000 (0:00:00.839) 0:00:04.144 **** 2025-09-04 01:00:07.372857 | orchestrator | ok: [testbed-node-0] 2025-09-04 01:00:07.372867 | orchestrator | ok: [testbed-node-1] 2025-09-04 01:00:07.372878 | orchestrator | ok: [testbed-node-2] 2025-09-04 01:00:07.372889 | orchestrator | 2025-09-04 01:00:07.372900 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-09-04 01:00:07.372910 | orchestrator 
| Thursday 04 September 2025 00:57:27 +0000 (0:00:00.443) 0:00:04.587 **** 2025-09-04 01:00:07.372921 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-04 01:00:07.372932 | orchestrator | 2025-09-04 01:00:07.372942 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-04 01:00:07.372953 | orchestrator | Thursday 04 September 2025 00:57:27 +0000 (0:00:00.694) 0:00:05.282 **** 2025-09-04 01:00:07.372964 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 01:00:07.372974 | orchestrator | 2025-09-04 01:00:07.372993 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-09-04 01:00:07.373004 | orchestrator | Thursday 04 September 2025 00:57:28 +0000 (0:00:00.521) 0:00:05.804 **** 2025-09-04 01:00:07.373021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-04 01:00:07.373034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-04 01:00:07.373095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-04 01:00:07.373117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-04 01:00:07.373138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-04 01:00:07.373155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-04 01:00:07.373167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-04 01:00:07.373179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-04 01:00:07.373194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-04 01:00:07.373222 | orchestrator | 2025-09-04 01:00:07.373242 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-09-04 01:00:07.373260 | orchestrator | Thursday 04 September 2025 00:57:31 +0000 (0:00:03.126) 0:00:08.930 **** 2025-09-04 01:00:07.373280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-04 01:00:07.373317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-04 01:00:07.373336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-04 01:00:07.373353 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:00:07.373370 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-04 01:00:07.373400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-04 01:00:07.373417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-04 01:00:07.373433 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:00:07.373458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-04 01:00:07.373484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-04 01:00:07.373501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-04 01:00:07.373518 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:00:07.373534 | orchestrator | 2025-09-04 01:00:07.373550 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-09-04 01:00:07.373567 | orchestrator | Thursday 04 September 2025 00:57:32 +0000 (0:00:00.733) 0:00:09.663 **** 2025-09-04 01:00:07.373584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-04 01:00:07.373611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-04 01:00:07.373628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-04 01:00:07.373645 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:00:07.373676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-04 01:00:07.373694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-04 01:00:07.373711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-04 01:00:07.373737 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:00:07.373754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-04 01:00:07.373780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-04 01:00:07.373807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-04 01:00:07.373824 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:00:07.373841 | orchestrator | 2025-09-04 01:00:07.373872 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-09-04 01:00:07.373942 | orchestrator | Thursday 04 September 2025 00:57:33 +0000 (0:00:00.732) 0:00:10.396 **** 2025-09-04 01:00:07.373961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-04 01:00:07.373997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-04 01:00:07.374107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-04 01:00:07.374139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-04 01:00:07.374198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-04 01:00:07.374215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-04 01:00:07.374241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 
'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-04 01:00:07.374261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-04 01:00:07.374278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-04 01:00:07.374299 | orchestrator | 2025-09-04 01:00:07.374314 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-09-04 01:00:07.374367 | orchestrator | Thursday 04 September 2025 00:57:36 +0000 (0:00:03.379) 0:00:13.775 **** 2025-09-04 01:00:07.374396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-04 01:00:07.374420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-04 01:00:07.374454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-04 01:00:07.374487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-04 01:00:07.374517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-04 01:00:07.374539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-04 01:00:07.374572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-04 01:00:07.374635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-04 01:00:07.374733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-04 01:00:07.374757 | orchestrator | 2025-09-04 01:00:07.374793 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-09-04 01:00:07.374823 | orchestrator | Thursday 04 September 2025 00:57:41 +0000 (0:00:05.317) 0:00:19.093 **** 2025-09-04 01:00:07.374860 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:00:07.374893 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:00:07.374931 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:00:07.374954 | orchestrator | 2025-09-04 01:00:07.374994 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-09-04 01:00:07.375037 | orchestrator | Thursday 04 September 2025 00:57:43 +0000 (0:00:01.408) 0:00:20.501 **** 2025-09-04 01:00:07.375082 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:00:07.375115 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:00:07.375140 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:00:07.375170 | orchestrator | 2025-09-04 01:00:07.375188 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-09-04 01:00:07.375205 | orchestrator | Thursday 04 September 2025 00:57:43 +0000 (0:00:00.488) 0:00:20.990 **** 2025-09-04 01:00:07.375221 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:00:07.375238 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:00:07.375256 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:00:07.375280 | orchestrator | 
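The item payloads dumped by the keystone tasks above and below all iterate over the same three-entry service map (keystone, keystone-ssh, keystone-fernet). As a readability aid, here is the testbed-node-0 keystone entry from the log laid out as YAML; it is a sketch reconstructed only from the logged values (empty placeholder volume entries omitted), not a copy of the kolla-ansible role source, where this map is normally defined as keystone_services with Jinja2 templating. The keystone-ssh and keystone-fernet entries follow the same shape with their own image, volumes, and healthcheck command.

    # Sketch reconstructed from the logged task items for testbed-node-0 (not the role source)
    keystone_services:
      keystone:
        container_name: keystone
        group: keystone
        enabled: true
        image: registry.osism.tech/kolla/keystone:2024.2
        volumes:
          - "/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro"
          - "/etc/localtime:/etc/localtime:ro"
          - "/etc/timezone:/etc/timezone:ro"
          - "kolla_logs:/var/log/kolla/"
          - "keystone_fernet_tokens:/etc/keystone/fernet-keys"
        dimensions: {}
        healthcheck:
          interval: "30"
          retries: "3"
          start_period: "5"
          test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:5000"]
          timeout: "30"
        haproxy:
          keystone_internal:
            enabled: true
            mode: http
            external: false
            tls_backend: "no"
            port: "5000"
            listen_port: "5000"
            backend_http_extra: ["balance roundrobin"]
          keystone_external:
            enabled: true
            mode: http
            external: true
            external_fqdn: api.testbed.osism.xyz
            tls_backend: "no"
            port: "5000"
            listen_port: "5000"
            backend_http_extra: ["balance roundrobin"]

Per the log, each node's healthcheck curls its own API address (192.168.16.10/.11/.12), while HAProxy exposes port 5000 internally and as api.testbed.osism.xyz externally; both frontends run plain HTTP with no TLS backend.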
2025-09-04 01:00:07.375303 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-09-04 01:00:07.375330 | orchestrator | Thursday 04 September 2025 00:57:43 +0000 (0:00:00.290) 0:00:21.280 **** 2025-09-04 01:00:07.375358 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:00:07.375387 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:00:07.375410 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:00:07.375425 | orchestrator | 2025-09-04 01:00:07.375440 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-09-04 01:00:07.375455 | orchestrator | Thursday 04 September 2025 00:57:44 +0000 (0:00:00.556) 0:00:21.836 **** 2025-09-04 01:00:07.375472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-04 01:00:07.375513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-04 01:00:07.375532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-04 01:00:07.375555 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-04 01:00:07.375584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-04 01:00:07.375607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-04 01:00:07.375641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-04 01:00:07.375686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-04 
01:00:07.375714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-04 01:00:07.375733 | orchestrator | 2025-09-04 01:00:07.375751 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-04 01:00:07.375772 | orchestrator | Thursday 04 September 2025 00:57:46 +0000 (0:00:02.376) 0:00:24.213 **** 2025-09-04 01:00:07.375794 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:00:07.375813 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:00:07.375831 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:00:07.375847 | orchestrator | 2025-09-04 01:00:07.375865 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-09-04 01:00:07.375885 | orchestrator | Thursday 04 September 2025 00:57:47 +0000 (0:00:00.319) 0:00:24.533 **** 2025-09-04 01:00:07.375903 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-04 01:00:07.375922 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-04 01:00:07.375938 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-04 01:00:07.375948 | orchestrator | 2025-09-04 01:00:07.375959 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-09-04 01:00:07.375969 | orchestrator | Thursday 04 September 2025 00:57:49 +0000 (0:00:01.837) 0:00:26.371 **** 2025-09-04 01:00:07.375980 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-04 01:00:07.375991 | orchestrator | 2025-09-04 01:00:07.376001 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-09-04 01:00:07.376012 | orchestrator | Thursday 04 September 2025 00:57:49 +0000 (0:00:00.886) 0:00:27.257 **** 2025-09-04 01:00:07.376023 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:00:07.376033 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:00:07.376044 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:00:07.376110 | orchestrator | 2025-09-04 01:00:07.376121 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-09-04 01:00:07.376131 | orchestrator | Thursday 04 September 2025 00:57:50 +0000 (0:00:00.740) 0:00:27.998 **** 2025-09-04 01:00:07.376142 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-04 01:00:07.376161 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-04 01:00:07.376172 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-04 01:00:07.376182 | orchestrator | 2025-09-04 01:00:07.376193 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-09-04 01:00:07.376204 | orchestrator | Thursday 04 September 2025 00:57:51 +0000 (0:00:01.005) 0:00:29.003 **** 2025-09-04 01:00:07.376215 | orchestrator | ok: 
[testbed-node-0] 2025-09-04 01:00:07.376226 | orchestrator | ok: [testbed-node-1] 2025-09-04 01:00:07.376236 | orchestrator | ok: [testbed-node-2] 2025-09-04 01:00:07.376247 | orchestrator | 2025-09-04 01:00:07.376258 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-09-04 01:00:07.376268 | orchestrator | Thursday 04 September 2025 00:57:51 +0000 (0:00:00.297) 0:00:29.300 **** 2025-09-04 01:00:07.376279 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-04 01:00:07.376290 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-04 01:00:07.376300 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-04 01:00:07.376311 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-04 01:00:07.376321 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-04 01:00:07.376340 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-04 01:00:07.376351 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-04 01:00:07.376362 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-04 01:00:07.376373 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-04 01:00:07.376389 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-04 01:00:07.376400 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-04 01:00:07.376410 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-04 01:00:07.376421 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-04 01:00:07.376432 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-04 01:00:07.376443 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-04 01:00:07.376454 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-04 01:00:07.376465 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-04 01:00:07.376475 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-04 01:00:07.376486 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-04 01:00:07.376497 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-04 01:00:07.376507 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-04 01:00:07.376518 | orchestrator | 2025-09-04 01:00:07.376529 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-09-04 01:00:07.376539 | orchestrator | Thursday 04 September 2025 00:58:00 +0000 (0:00:08.906) 0:00:38.207 **** 2025-09-04 01:00:07.376550 | 
orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-04 01:00:07.376561 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-04 01:00:07.376571 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-04 01:00:07.376589 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-04 01:00:07.376600 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-04 01:00:07.376611 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-04 01:00:07.376621 | orchestrator | 2025-09-04 01:00:07.376631 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-09-04 01:00:07.376640 | orchestrator | Thursday 04 September 2025 00:58:03 +0000 (0:00:02.861) 0:00:41.068 **** 2025-09-04 01:00:07.376651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-04 01:00:07.376668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-04 01:00:07.376684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-04 01:00:07.376695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-04 01:00:07.376711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-04 01:00:07.376721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-04 01:00:07.376732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-04 01:00:07.376748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-04 01:00:07.376765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-04 01:00:07.376776 | orchestrator | 2025-09-04 01:00:07.376785 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-04 01:00:07.376795 | orchestrator | Thursday 04 September 2025 00:58:06 +0000 (0:00:02.354) 0:00:43.423 **** 2025-09-04 01:00:07.376805 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:00:07.376814 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:00:07.376824 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:00:07.376834 | orchestrator | 2025-09-04 01:00:07.376843 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-09-04 01:00:07.376857 | orchestrator | Thursday 04 September 2025 00:58:06 +0000 (0:00:00.327) 0:00:43.751 **** 2025-09-04 01:00:07.376867 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:00:07.376876 | orchestrator | 2025-09-04 01:00:07.376886 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-09-04 01:00:07.376895 | orchestrator | Thursday 04 September 2025 00:58:08 +0000 (0:00:02.251) 0:00:46.002 **** 2025-09-04 01:00:07.376905 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:00:07.376914 | orchestrator | 2025-09-04 01:00:07.376924 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-09-04 01:00:07.376933 | orchestrator | Thursday 04 September 2025 00:58:10 +0000 (0:00:02.167) 0:00:48.169 **** 2025-09-04 01:00:07.376942 | orchestrator | ok: [testbed-node-2] 2025-09-04 01:00:07.376952 | orchestrator | ok: [testbed-node-1] 2025-09-04 01:00:07.376962 | orchestrator | ok: [testbed-node-0] 2025-09-04 01:00:07.376971 | orchestrator | 2025-09-04 01:00:07.376981 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-09-04 01:00:07.376990 | orchestrator | Thursday 04 September 2025 00:58:11 +0000 (0:00:01.030) 0:00:49.200 **** 2025-09-04 01:00:07.377000 | orchestrator | ok: [testbed-node-0] 2025-09-04 01:00:07.377009 | orchestrator | ok: [testbed-node-1] 2025-09-04 01:00:07.377019 | orchestrator | ok: [testbed-node-2] 2025-09-04 01:00:07.377029 | orchestrator | 2025-09-04 01:00:07.377038 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-09-04 01:00:07.377062 | orchestrator | Thursday 04 September 2025 00:58:12 +0000 (0:00:00.592) 0:00:49.792 **** 2025-09-04 01:00:07.377072 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:00:07.377081 | orchestrator | skipping: [testbed-node-1] 2025-09-04 
01:00:07.377091 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:00:07.377101 | orchestrator | 2025-09-04 01:00:07.377110 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-09-04 01:00:07.377120 | orchestrator | Thursday 04 September 2025 00:58:12 +0000 (0:00:00.366) 0:00:50.159 **** 2025-09-04 01:00:07.377129 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:00:07.377139 | orchestrator | 2025-09-04 01:00:07.377148 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-09-04 01:00:07.377158 | orchestrator | Thursday 04 September 2025 00:58:26 +0000 (0:00:13.438) 0:01:03.597 **** 2025-09-04 01:00:07.377167 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:00:07.377177 | orchestrator | 2025-09-04 01:00:07.377186 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-04 01:00:07.377196 | orchestrator | Thursday 04 September 2025 00:58:35 +0000 (0:00:09.310) 0:01:12.907 **** 2025-09-04 01:00:07.377206 | orchestrator | 2025-09-04 01:00:07.377215 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-04 01:00:07.377224 | orchestrator | Thursday 04 September 2025 00:58:35 +0000 (0:00:00.071) 0:01:12.979 **** 2025-09-04 01:00:07.377234 | orchestrator | 2025-09-04 01:00:07.377243 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-04 01:00:07.377253 | orchestrator | Thursday 04 September 2025 00:58:35 +0000 (0:00:00.067) 0:01:13.046 **** 2025-09-04 01:00:07.377262 | orchestrator | 2025-09-04 01:00:07.377272 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-09-04 01:00:07.377281 | orchestrator | Thursday 04 September 2025 00:58:35 +0000 (0:00:00.073) 0:01:13.119 **** 2025-09-04 01:00:07.377291 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:00:07.377300 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:00:07.377310 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:00:07.377319 | orchestrator | 2025-09-04 01:00:07.377329 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-09-04 01:00:07.377338 | orchestrator | Thursday 04 September 2025 00:59:00 +0000 (0:00:25.036) 0:01:38.156 **** 2025-09-04 01:00:07.377348 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:00:07.377357 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:00:07.377372 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:00:07.377382 | orchestrator | 2025-09-04 01:00:07.377391 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-09-04 01:00:07.377401 | orchestrator | Thursday 04 September 2025 00:59:08 +0000 (0:00:07.528) 0:01:45.684 **** 2025-09-04 01:00:07.377410 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:00:07.377420 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:00:07.377435 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:00:07.377445 | orchestrator | 2025-09-04 01:00:07.377454 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-04 01:00:07.377464 | orchestrator | Thursday 04 September 2025 00:59:15 +0000 (0:00:07.237) 0:01:52.921 **** 2025-09-04 01:00:07.377473 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2025-09-04 01:00:07.377483 | orchestrator | 2025-09-04 01:00:07.377492 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-09-04 01:00:07.377506 | orchestrator | Thursday 04 September 2025 00:59:16 +0000 (0:00:00.703) 0:01:53.625 **** 2025-09-04 01:00:07.377515 | orchestrator | ok: [testbed-node-0] 2025-09-04 01:00:07.377525 | orchestrator | ok: [testbed-node-1] 2025-09-04 01:00:07.377535 | orchestrator | ok: [testbed-node-2] 2025-09-04 01:00:07.377545 | orchestrator | 2025-09-04 01:00:07.377554 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-09-04 01:00:07.377564 | orchestrator | Thursday 04 September 2025 00:59:17 +0000 (0:00:00.787) 0:01:54.412 **** 2025-09-04 01:00:07.377573 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:00:07.377583 | orchestrator | 2025-09-04 01:00:07.377592 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-09-04 01:00:07.377602 | orchestrator | Thursday 04 September 2025 00:59:18 +0000 (0:00:01.758) 0:01:56.170 **** 2025-09-04 01:00:07.377612 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-09-04 01:00:07.377621 | orchestrator | 2025-09-04 01:00:07.377631 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-09-04 01:00:07.377641 | orchestrator | Thursday 04 September 2025 00:59:29 +0000 (0:00:10.470) 0:02:06.641 **** 2025-09-04 01:00:07.377650 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-09-04 01:00:07.377660 | orchestrator | 2025-09-04 01:00:07.377669 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-09-04 01:00:07.377679 | orchestrator | Thursday 04 September 2025 00:59:51 +0000 (0:00:21.826) 0:02:28.467 **** 2025-09-04 01:00:07.377688 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-09-04 01:00:07.377698 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-09-04 01:00:07.377707 | orchestrator | 2025-09-04 01:00:07.377717 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-09-04 01:00:07.377726 | orchestrator | Thursday 04 September 2025 00:59:58 +0000 (0:00:07.255) 0:02:35.723 **** 2025-09-04 01:00:07.377736 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:00:07.377745 | orchestrator | 2025-09-04 01:00:07.377755 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-09-04 01:00:07.377764 | orchestrator | Thursday 04 September 2025 00:59:58 +0000 (0:00:00.333) 0:02:36.057 **** 2025-09-04 01:00:07.377774 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:00:07.377783 | orchestrator | 2025-09-04 01:00:07.377793 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-09-04 01:00:07.377802 | orchestrator | Thursday 04 September 2025 00:59:58 +0000 (0:00:00.291) 0:02:36.348 **** 2025-09-04 01:00:07.377812 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:00:07.377821 | orchestrator | 2025-09-04 01:00:07.377831 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-09-04 01:00:07.377840 | orchestrator | Thursday 04 September 2025 00:59:59 +0000 (0:00:00.194) 0:02:36.543 **** 2025-09-04 01:00:07.377850 
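The service-ks-register tasks above put Keystone itself into the service catalog it has just bootstrapped. In plain openstack CLI terms this corresponds roughly to the calls below; kolla-ansible performs the registration through Ansible OpenStack modules, so these commands are an equivalent sketch rather than what actually ran:

  # Identity service entry plus the internal and public endpoints reported above.
  openstack service create --name keystone identity
  openstack endpoint create --region RegionOne keystone internal https://api-int.testbed.osism.xyz:5000
  openstack endpoint create --region RegionOne keystone public https://api.testbed.osism.xyz:5000
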
| orchestrator | skipping: [testbed-node-0] 2025-09-04 01:00:07.377865 | orchestrator | 2025-09-04 01:00:07.377874 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-09-04 01:00:07.377884 | orchestrator | Thursday 04 September 2025 01:00:00 +0000 (0:00:01.071) 0:02:37.614 **** 2025-09-04 01:00:07.377893 | orchestrator | ok: [testbed-node-0] 2025-09-04 01:00:07.377903 | orchestrator | 2025-09-04 01:00:07.377913 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-04 01:00:07.377922 | orchestrator | Thursday 04 September 2025 01:00:03 +0000 (0:00:03.642) 0:02:41.257 **** 2025-09-04 01:00:07.377932 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:00:07.377941 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:00:07.377951 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:00:07.377960 | orchestrator | 2025-09-04 01:00:07.377969 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 01:00:07.377979 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-09-04 01:00:07.377989 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-04 01:00:07.377999 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-04 01:00:07.378009 | orchestrator | 2025-09-04 01:00:07.378042 | orchestrator | 2025-09-04 01:00:07.378067 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 01:00:07.378076 | orchestrator | Thursday 04 September 2025 01:00:04 +0000 (0:00:00.517) 0:02:41.774 **** 2025-09-04 01:00:07.378086 | orchestrator | =============================================================================== 2025-09-04 01:00:07.378096 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 25.04s 2025-09-04 01:00:07.378105 | orchestrator | service-ks-register : keystone | Creating services --------------------- 21.83s 2025-09-04 01:00:07.378115 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.44s 2025-09-04 01:00:07.378124 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 10.47s 2025-09-04 01:00:07.378134 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.31s 2025-09-04 01:00:07.378149 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.91s 2025-09-04 01:00:07.378159 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 7.53s 2025-09-04 01:00:07.378169 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.26s 2025-09-04 01:00:07.378178 | orchestrator | keystone : Restart keystone container ----------------------------------- 7.24s 2025-09-04 01:00:07.378188 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.32s 2025-09-04 01:00:07.378201 | orchestrator | keystone : Creating default user role ----------------------------------- 3.64s 2025-09-04 01:00:07.378211 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.38s 2025-09-04 01:00:07.378221 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.13s 2025-09-04 
01:00:07.378230 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.86s 2025-09-04 01:00:07.378240 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.38s 2025-09-04 01:00:07.378249 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.35s 2025-09-04 01:00:07.378259 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.25s 2025-09-04 01:00:07.378268 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.17s 2025-09-04 01:00:07.378278 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.84s 2025-09-04 01:00:07.378287 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.76s 2025-09-04 01:00:07.378297 | orchestrator | 2025-09-04 01:00:07 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:00:07.378312 | orchestrator | 2025-09-04 01:00:07 | INFO  | Task 0516d205-8d75-4cbc-8763-06b32a8ab0bc is in state STARTED 2025-09-04 01:00:07.378321 | orchestrator | 2025-09-04 01:00:07 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:00:10.400812 | orchestrator | 2025-09-04 01:00:10 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:00:10.401723 | orchestrator | 2025-09-04 01:00:10 | INFO  | Task 99bc116f-7540-406a-8e7b-19b901e21c35 is in state STARTED 2025-09-04 01:00:10.405208 | orchestrator | 2025-09-04 01:00:10 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:00:10.407369 | orchestrator | 2025-09-04 01:00:10 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:00:10.408974 | orchestrator | 2025-09-04 01:00:10 | INFO  | Task 0516d205-8d75-4cbc-8763-06b32a8ab0bc is in state STARTED 2025-09-04 01:00:10.408996 | orchestrator | 2025-09-04 01:00:10 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:00:13.447217 | orchestrator | 2025-09-04 01:00:13 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:00:13.449469 | orchestrator | 2025-09-04 01:00:13 | INFO  | Task 99bc116f-7540-406a-8e7b-19b901e21c35 is in state STARTED 2025-09-04 01:00:13.451012 | orchestrator | 2025-09-04 01:00:13 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:00:13.454290 | orchestrator | 2025-09-04 01:00:13 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:00:13.455292 | orchestrator | 2025-09-04 01:00:13 | INFO  | Task 0516d205-8d75-4cbc-8763-06b32a8ab0bc is in state STARTED 2025-09-04 01:00:13.455564 | orchestrator | 2025-09-04 01:00:13 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:00:16.496789 | orchestrator | 2025-09-04 01:00:16 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:00:16.498302 | orchestrator | 2025-09-04 01:00:16 | INFO  | Task 99bc116f-7540-406a-8e7b-19b901e21c35 is in state STARTED 2025-09-04 01:00:16.499921 | orchestrator | 2025-09-04 01:00:16 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:00:16.502137 | orchestrator | 2025-09-04 01:00:16 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:00:16.503823 | orchestrator | 2025-09-04 01:00:16 | INFO  | Task 0516d205-8d75-4cbc-8763-06b32a8ab0bc is in state STARTED 2025-09-04 01:00:16.504184 | orchestrator | 
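The repeated "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" lines around this point are the osism tooling waiting for the deployment tasks it has queued to finish. Conceptually this is a plain polling loop; the sketch below only illustrates that behaviour, with check_state standing in as a hypothetical placeholder for however the tool actually queries its task backend:

  # Illustrative polling loop. TASK_IDS would hold the task UUIDs shown in the log;
  # check_state is a hypothetical placeholder, not a real osism command.
  while :; do
      pending=0
      for task in "${TASK_IDS[@]}"; do
          state=$(check_state "$task")
          echo "Task $task is in state $state"
          [ "$state" = "SUCCESS" ] || pending=$((pending + 1))
      done
      [ "$pending" -eq 0 ] && break
      echo "Wait 1 second(s) until the next check"
      sleep 1
  done
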
2025-09-04 01:00:16 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:00:19.542788 | orchestrator | 2025-09-04 01:00:19 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:00:19.542870 | orchestrator | 2025-09-04 01:00:19 | INFO  | Task 99bc116f-7540-406a-8e7b-19b901e21c35 is in state STARTED 2025-09-04 01:00:19.543315 | orchestrator | 2025-09-04 01:00:19 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:00:19.544067 | orchestrator | 2025-09-04 01:00:19 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:00:19.546407 | orchestrator | 2025-09-04 01:00:19 | INFO  | Task 0516d205-8d75-4cbc-8763-06b32a8ab0bc is in state STARTED 2025-09-04 01:00:19.546451 | orchestrator | 2025-09-04 01:00:19 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:00:22.585710 | orchestrator | 2025-09-04 01:00:22 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:00:22.586337 | orchestrator | 2025-09-04 01:00:22 | INFO  | Task 99bc116f-7540-406a-8e7b-19b901e21c35 is in state STARTED 2025-09-04 01:00:22.587360 | orchestrator | 2025-09-04 01:00:22 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:00:22.587889 | orchestrator | 2025-09-04 01:00:22 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:00:22.589988 | orchestrator | 2025-09-04 01:00:22 | INFO  | Task 0516d205-8d75-4cbc-8763-06b32a8ab0bc is in state STARTED 2025-09-04 01:00:22.590009 | orchestrator | 2025-09-04 01:00:22 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:00:25.638711 | orchestrator | 2025-09-04 01:00:25 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:00:25.638835 | orchestrator | 2025-09-04 01:00:25 | INFO  | Task 99bc116f-7540-406a-8e7b-19b901e21c35 is in state STARTED 2025-09-04 01:00:25.638850 | orchestrator | 2025-09-04 01:00:25 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:00:25.638861 | orchestrator | 2025-09-04 01:00:25 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:00:25.638872 | orchestrator | 2025-09-04 01:00:25 | INFO  | Task 0516d205-8d75-4cbc-8763-06b32a8ab0bc is in state STARTED 2025-09-04 01:00:25.638884 | orchestrator | 2025-09-04 01:00:25 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:00:28.669217 | orchestrator | 2025-09-04 01:00:28 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:00:28.669338 | orchestrator | 2025-09-04 01:00:28 | INFO  | Task 99bc116f-7540-406a-8e7b-19b901e21c35 is in state STARTED 2025-09-04 01:00:28.669353 | orchestrator | 2025-09-04 01:00:28 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:00:28.669379 | orchestrator | 2025-09-04 01:00:28 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:00:28.670417 | orchestrator | 2025-09-04 01:00:28 | INFO  | Task 0516d205-8d75-4cbc-8763-06b32a8ab0bc is in state STARTED 2025-09-04 01:00:28.670442 | orchestrator | 2025-09-04 01:00:28 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:00:31.831436 | orchestrator | 2025-09-04 01:00:31 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:00:31.831562 | orchestrator | 2025-09-04 01:00:31 | INFO  | Task 99bc116f-7540-406a-8e7b-19b901e21c35 is in state STARTED 2025-09-04 01:00:31.831590 | orchestrator | 
2025-09-04 01:00:31 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:00:31.831609 | orchestrator | 2025-09-04 01:00:31 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:00:31.831630 | orchestrator | 2025-09-04 01:00:31 | INFO  | Task 0516d205-8d75-4cbc-8763-06b32a8ab0bc is in state STARTED 2025-09-04 01:00:31.831648 | orchestrator | 2025-09-04 01:00:31 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:00:34.731843 | orchestrator | 2025-09-04 01:00:34 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:00:34.732338 | orchestrator | 2025-09-04 01:00:34 | INFO  | Task 99bc116f-7540-406a-8e7b-19b901e21c35 is in state STARTED 2025-09-04 01:00:34.733129 | orchestrator | 2025-09-04 01:00:34 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:00:34.733906 | orchestrator | 2025-09-04 01:00:34 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:00:34.735906 | orchestrator | 2025-09-04 01:00:34 | INFO  | Task 0516d205-8d75-4cbc-8763-06b32a8ab0bc is in state STARTED 2025-09-04 01:00:34.735930 | orchestrator | 2025-09-04 01:00:34 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:00:37.765229 | orchestrator | 2025-09-04 01:00:37 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:00:37.766386 | orchestrator | 2025-09-04 01:00:37 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:00:37.767973 | orchestrator | 2025-09-04 01:00:37 | INFO  | Task 99bc116f-7540-406a-8e7b-19b901e21c35 is in state STARTED 2025-09-04 01:00:37.769330 | orchestrator | 2025-09-04 01:00:37 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:00:37.771085 | orchestrator | 2025-09-04 01:00:37 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:00:37.772163 | orchestrator | 2025-09-04 01:00:37 | INFO  | Task 0516d205-8d75-4cbc-8763-06b32a8ab0bc is in state SUCCESS 2025-09-04 01:00:37.772520 | orchestrator | 2025-09-04 01:00:37 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:00:40.794874 | orchestrator | 2025-09-04 01:00:40 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:00:40.795897 | orchestrator | 2025-09-04 01:00:40 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:00:40.799565 | orchestrator | 2025-09-04 01:00:40 | INFO  | Task 99bc116f-7540-406a-8e7b-19b901e21c35 is in state STARTED 2025-09-04 01:00:40.799597 | orchestrator | 2025-09-04 01:00:40 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:00:40.799610 | orchestrator | 2025-09-04 01:00:40 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:00:40.799621 | orchestrator | 2025-09-04 01:00:40 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:00:43.832912 | orchestrator | 2025-09-04 01:00:43 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:00:43.833002 | orchestrator | 2025-09-04 01:00:43 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:00:43.833615 | orchestrator | 2025-09-04 01:00:43 | INFO  | Task 99bc116f-7540-406a-8e7b-19b901e21c35 is in state STARTED 2025-09-04 01:00:43.834218 | orchestrator | 2025-09-04 01:00:43 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 
01:00:43.836227 | orchestrator | 2025-09-04 01:00:43 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:00:43.836250 | orchestrator | 2025-09-04 01:00:43 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:00:46.863243 | orchestrator | 2025-09-04 01:00:46 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:00:46.865565 | orchestrator | 2025-09-04 01:00:46 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:00:46.865996 | orchestrator | 2025-09-04 01:00:46 | INFO  | Task 99bc116f-7540-406a-8e7b-19b901e21c35 is in state STARTED 2025-09-04 01:00:46.868297 | orchestrator | 2025-09-04 01:00:46 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:00:46.869044 | orchestrator | 2025-09-04 01:00:46 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:00:46.869064 | orchestrator | 2025-09-04 01:00:46 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:00:49.890378 | orchestrator | 2025-09-04 01:00:49 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:00:49.891073 | orchestrator | 2025-09-04 01:00:49 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:00:49.891694 | orchestrator | 2025-09-04 01:00:49 | INFO  | Task 99bc116f-7540-406a-8e7b-19b901e21c35 is in state STARTED 2025-09-04 01:00:49.893358 | orchestrator | 2025-09-04 01:00:49 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:00:49.894134 | orchestrator | 2025-09-04 01:00:49 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:00:49.894161 | orchestrator | 2025-09-04 01:00:49 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:00:52.918593 | orchestrator | 2025-09-04 01:00:52 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:00:52.918686 | orchestrator | 2025-09-04 01:00:52 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:00:52.919174 | orchestrator | 2025-09-04 01:00:52 | INFO  | Task 99bc116f-7540-406a-8e7b-19b901e21c35 is in state STARTED 2025-09-04 01:00:52.919777 | orchestrator | 2025-09-04 01:00:52 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:00:52.920195 | orchestrator | 2025-09-04 01:00:52 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:00:52.920217 | orchestrator | 2025-09-04 01:00:52 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:00:55.946554 | orchestrator | 2025-09-04 01:00:55 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:00:55.946650 | orchestrator | 2025-09-04 01:00:55 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:00:55.946667 | orchestrator | 2025-09-04 01:00:55 | INFO  | Task 99bc116f-7540-406a-8e7b-19b901e21c35 is in state STARTED 2025-09-04 01:00:55.946690 | orchestrator | 2025-09-04 01:00:55 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:00:55.947365 | orchestrator | 2025-09-04 01:00:55 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:00:55.947456 | orchestrator | 2025-09-04 01:00:55 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:00:58.971262 | orchestrator | 2025-09-04 01:00:58 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 
01:00:58.971354 | orchestrator | 2025-09-04 01:00:58 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:00:58.972817 | orchestrator | 2025-09-04 01:00:58 | INFO  | Task 99bc116f-7540-406a-8e7b-19b901e21c35 is in state STARTED 2025-09-04 01:00:58.973292 | orchestrator | 2025-09-04 01:00:58 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:00:58.973825 | orchestrator | 2025-09-04 01:00:58 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:00:58.973850 | orchestrator | 2025-09-04 01:00:58 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:01:02.007727 | orchestrator | 2025-09-04 01:01:02 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:01:02.007914 | orchestrator | 2025-09-04 01:01:02 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:01:02.008666 | orchestrator | 2025-09-04 01:01:02 | INFO  | Task 99bc116f-7540-406a-8e7b-19b901e21c35 is in state STARTED 2025-09-04 01:01:02.009218 | orchestrator | 2025-09-04 01:01:02 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:01:02.011055 | orchestrator | 2025-09-04 01:01:02 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:01:02.011081 | orchestrator | 2025-09-04 01:01:02 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:01:05.042835 | orchestrator | 2025-09-04 01:01:05 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:01:05.044733 | orchestrator | 2025-09-04 01:01:05 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:01:05.045619 | orchestrator | 2025-09-04 01:01:05 | INFO  | Task 99bc116f-7540-406a-8e7b-19b901e21c35 is in state STARTED 2025-09-04 01:01:05.046321 | orchestrator | 2025-09-04 01:01:05 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:01:05.046995 | orchestrator | 2025-09-04 01:01:05 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:01:05.047096 | orchestrator | 2025-09-04 01:01:05 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:01:08.126961 | orchestrator | 2025-09-04 01:01:08 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:01:08.127565 | orchestrator | 2025-09-04 01:01:08 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:01:08.128288 | orchestrator | 2025-09-04 01:01:08 | INFO  | Task 99bc116f-7540-406a-8e7b-19b901e21c35 is in state STARTED 2025-09-04 01:01:08.129232 | orchestrator | 2025-09-04 01:01:08 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:01:08.133841 | orchestrator | 2025-09-04 01:01:08 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:01:08.134082 | orchestrator | 2025-09-04 01:01:08 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:01:11.159738 | orchestrator | 2025-09-04 01:01:11 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:01:11.159822 | orchestrator | 2025-09-04 01:01:11 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:01:11.160153 | orchestrator | 2025-09-04 01:01:11 | INFO  | Task 99bc116f-7540-406a-8e7b-19b901e21c35 is in state STARTED 2025-09-04 01:01:11.160711 | orchestrator | 2025-09-04 01:01:11 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in 
state STARTED 2025-09-04 01:01:11.161450 | orchestrator | 2025-09-04 01:01:11 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:01:11.161470 | orchestrator | 2025-09-04 01:01:11 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:01:14.195634 | orchestrator | 2025-09-04 01:01:14 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:01:14.196069 | orchestrator | 2025-09-04 01:01:14 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:01:14.196617 | orchestrator | 2025-09-04 01:01:14 | INFO  | Task 99bc116f-7540-406a-8e7b-19b901e21c35 is in state STARTED 2025-09-04 01:01:14.197128 | orchestrator | 2025-09-04 01:01:14 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:01:14.200281 | orchestrator | 2025-09-04 01:01:14 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:01:14.200318 | orchestrator | 2025-09-04 01:01:14 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:01:17.233601 | orchestrator | 2025-09-04 01:01:17 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:01:17.234876 | orchestrator | 2025-09-04 01:01:17 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:01:17.235198 | orchestrator | 2025-09-04 01:01:17 | INFO  | Task 99bc116f-7540-406a-8e7b-19b901e21c35 is in state SUCCESS 2025-09-04 01:01:17.235805 | orchestrator | 2025-09-04 01:01:17.235833 | orchestrator | 2025-09-04 01:01:17.235845 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-04 01:01:17.235857 | orchestrator | 2025-09-04 01:01:17.235869 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-04 01:01:17.235881 | orchestrator | Thursday 04 September 2025 01:00:01 +0000 (0:00:00.242) 0:00:00.242 **** 2025-09-04 01:01:17.235960 | orchestrator | ok: [testbed-node-0] 2025-09-04 01:01:17.235976 | orchestrator | ok: [testbed-node-1] 2025-09-04 01:01:17.235987 | orchestrator | ok: [testbed-node-2] 2025-09-04 01:01:17.235998 | orchestrator | ok: [testbed-manager] 2025-09-04 01:01:17.236009 | orchestrator | ok: [testbed-node-3] 2025-09-04 01:01:17.236063 | orchestrator | ok: [testbed-node-4] 2025-09-04 01:01:17.236084 | orchestrator | ok: [testbed-node-5] 2025-09-04 01:01:17.236112 | orchestrator | 2025-09-04 01:01:17.236131 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-04 01:01:17.236149 | orchestrator | Thursday 04 September 2025 01:00:01 +0000 (0:00:00.678) 0:00:00.921 **** 2025-09-04 01:01:17.236165 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-09-04 01:01:17.236184 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-09-04 01:01:17.236202 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-09-04 01:01:17.236220 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-09-04 01:01:17.236238 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-09-04 01:01:17.236257 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-09-04 01:01:17.236275 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-09-04 01:01:17.236293 | orchestrator | 2025-09-04 01:01:17.236308 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 
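The ceph-rgw play that follows registers the Ceph RADOS Gateway as a Swift-compatible object store in Keystone: a swift service with templated endpoints, a ceph_rgw service user, and the ResellerAdmin role. Expressed as openstack CLI calls, the tasks below would look roughly like this (the playbook uses Ansible modules and injects the password separately, so this is an equivalent sketch, not the literal commands):

  # Swift object-store service, endpoints with the project-id template passed literally,
  # plus the service user and roles created and granted in the tasks below.
  openstack service create --name swift object-store
  openstack endpoint create --region RegionOne swift internal 'https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s'
  openstack endpoint create --region RegionOne swift public 'https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s'
  openstack user create --project service ceph_rgw
  openstack role create ResellerAdmin
  openstack role add --project service --user ceph_rgw admin
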
2025-09-04 01:01:17.236319 | orchestrator | 2025-09-04 01:01:17.236330 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-09-04 01:01:17.236341 | orchestrator | Thursday 04 September 2025 01:00:02 +0000 (0:00:00.622) 0:00:01.544 **** 2025-09-04 01:01:17.236353 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 01:01:17.236533 | orchestrator | 2025-09-04 01:01:17.236549 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-09-04 01:01:17.236563 | orchestrator | Thursday 04 September 2025 01:00:04 +0000 (0:00:02.484) 0:00:04.029 **** 2025-09-04 01:01:17.236576 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-09-04 01:01:17.236589 | orchestrator | 2025-09-04 01:01:17.236603 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-09-04 01:01:17.236616 | orchestrator | Thursday 04 September 2025 01:00:08 +0000 (0:00:03.913) 0:00:07.943 **** 2025-09-04 01:01:17.236630 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-09-04 01:01:17.236644 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-09-04 01:01:17.236663 | orchestrator | 2025-09-04 01:01:17.236674 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-09-04 01:01:17.236685 | orchestrator | Thursday 04 September 2025 01:00:14 +0000 (0:00:05.880) 0:00:13.823 **** 2025-09-04 01:01:17.236696 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-04 01:01:17.236709 | orchestrator | 2025-09-04 01:01:17.236747 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-09-04 01:01:17.236766 | orchestrator | Thursday 04 September 2025 01:00:18 +0000 (0:00:03.459) 0:00:17.282 **** 2025-09-04 01:01:17.236785 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-04 01:01:17.236804 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-09-04 01:01:17.236823 | orchestrator | 2025-09-04 01:01:17.236843 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-09-04 01:01:17.236862 | orchestrator | Thursday 04 September 2025 01:00:22 +0000 (0:00:04.407) 0:00:21.689 **** 2025-09-04 01:01:17.236875 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-04 01:01:17.236886 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-09-04 01:01:17.236913 | orchestrator | 2025-09-04 01:01:17.236924 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-09-04 01:01:17.236935 | orchestrator | Thursday 04 September 2025 01:00:28 +0000 (0:00:06.430) 0:00:28.120 **** 2025-09-04 01:01:17.236946 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-09-04 01:01:17.236957 | orchestrator | 2025-09-04 01:01:17.236967 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 01:01:17.236978 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 01:01:17.236990 | 
orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 01:01:17.237045 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 01:01:17.237065 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 01:01:17.237077 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 01:01:17.237104 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 01:01:17.237116 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 01:01:17.237127 | orchestrator | 2025-09-04 01:01:17.237137 | orchestrator | 2025-09-04 01:01:17.237148 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 01:01:17.237159 | orchestrator | Thursday 04 September 2025 01:00:34 +0000 (0:00:05.676) 0:00:33.796 **** 2025-09-04 01:01:17.237170 | orchestrator | =============================================================================== 2025-09-04 01:01:17.237180 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.43s 2025-09-04 01:01:17.237191 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.88s 2025-09-04 01:01:17.237201 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.68s 2025-09-04 01:01:17.237212 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.41s 2025-09-04 01:01:17.237223 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.91s 2025-09-04 01:01:17.237237 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.46s 2025-09-04 01:01:17.237255 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 2.48s 2025-09-04 01:01:17.237273 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.68s 2025-09-04 01:01:17.237291 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.62s 2025-09-04 01:01:17.237310 | orchestrator | 2025-09-04 01:01:17.237328 | orchestrator | 2025-09-04 01:01:17.237347 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-09-04 01:01:17.237364 | orchestrator | 2025-09-04 01:01:17.237380 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-09-04 01:01:17.237391 | orchestrator | Thursday 04 September 2025 00:59:53 +0000 (0:00:00.265) 0:00:00.265 **** 2025-09-04 01:01:17.237402 | orchestrator | changed: [testbed-manager] 2025-09-04 01:01:17.237413 | orchestrator | 2025-09-04 01:01:17.237423 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-09-04 01:01:17.237434 | orchestrator | Thursday 04 September 2025 00:59:55 +0000 (0:00:02.234) 0:00:02.500 **** 2025-09-04 01:01:17.237445 | orchestrator | changed: [testbed-manager] 2025-09-04 01:01:17.237455 | orchestrator | 2025-09-04 01:01:17.237466 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-09-04 01:01:17.237476 | orchestrator | Thursday 04 September 2025 00:59:56 +0000 (0:00:01.057) 0:00:03.558 **** 2025-09-04 
01:01:17.237506 | orchestrator | changed: [testbed-manager] 2025-09-04 01:01:17.237524 | orchestrator | 2025-09-04 01:01:17.237542 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-09-04 01:01:17.237561 | orchestrator | Thursday 04 September 2025 00:59:57 +0000 (0:00:01.023) 0:00:04.581 **** 2025-09-04 01:01:17.237580 | orchestrator | changed: [testbed-manager] 2025-09-04 01:01:17.237599 | orchestrator | 2025-09-04 01:01:17.237618 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-09-04 01:01:17.237632 | orchestrator | Thursday 04 September 2025 00:59:59 +0000 (0:00:01.315) 0:00:05.897 **** 2025-09-04 01:01:17.237642 | orchestrator | changed: [testbed-manager] 2025-09-04 01:01:17.237653 | orchestrator | 2025-09-04 01:01:17.237664 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-09-04 01:01:17.237674 | orchestrator | Thursday 04 September 2025 01:00:00 +0000 (0:00:01.125) 0:00:07.022 **** 2025-09-04 01:01:17.237685 | orchestrator | changed: [testbed-manager] 2025-09-04 01:01:17.237696 | orchestrator | 2025-09-04 01:01:17.237706 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-09-04 01:01:17.237717 | orchestrator | Thursday 04 September 2025 01:00:00 +0000 (0:00:00.843) 0:00:07.865 **** 2025-09-04 01:01:17.237728 | orchestrator | changed: [testbed-manager] 2025-09-04 01:01:17.237738 | orchestrator | 2025-09-04 01:01:17.237749 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-09-04 01:01:17.237760 | orchestrator | Thursday 04 September 2025 01:00:02 +0000 (0:00:01.250) 0:00:09.116 **** 2025-09-04 01:01:17.237770 | orchestrator | changed: [testbed-manager] 2025-09-04 01:01:17.237781 | orchestrator | 2025-09-04 01:01:17.237792 | orchestrator | TASK [Create admin user] ******************************************************* 2025-09-04 01:01:17.237802 | orchestrator | Thursday 04 September 2025 01:00:03 +0000 (0:00:01.253) 0:00:10.370 **** 2025-09-04 01:01:17.237813 | orchestrator | changed: [testbed-manager] 2025-09-04 01:01:17.237824 | orchestrator | 2025-09-04 01:01:17.237835 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-09-04 01:01:17.237845 | orchestrator | Thursday 04 September 2025 01:00:51 +0000 (0:00:48.360) 0:00:58.731 **** 2025-09-04 01:01:17.237856 | orchestrator | skipping: [testbed-manager] 2025-09-04 01:01:17.237867 | orchestrator | 2025-09-04 01:01:17.237877 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-04 01:01:17.237888 | orchestrator | 2025-09-04 01:01:17.237899 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-04 01:01:17.237910 | orchestrator | Thursday 04 September 2025 01:00:51 +0000 (0:00:00.108) 0:00:58.839 **** 2025-09-04 01:01:17.237921 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:01:17.237931 | orchestrator | 2025-09-04 01:01:17.237942 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-04 01:01:17.237953 | orchestrator | 2025-09-04 01:01:17.237971 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-04 01:01:17.237982 | orchestrator | Thursday 04 September 2025 01:01:03 +0000 (0:00:11.567) 0:01:10.407 **** 2025-09-04 
01:01:17.237993 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:01:17.238004 | orchestrator | 2025-09-04 01:01:17.238080 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-04 01:01:17.238096 | orchestrator | 2025-09-04 01:01:17.238107 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-04 01:01:17.238119 | orchestrator | Thursday 04 September 2025 01:01:04 +0000 (0:00:01.209) 0:01:11.617 **** 2025-09-04 01:01:17.238129 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:01:17.238140 | orchestrator | 2025-09-04 01:01:17.238161 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 01:01:17.238173 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-04 01:01:17.238184 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 01:01:17.238206 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 01:01:17.238243 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 01:01:17.238254 | orchestrator | 2025-09-04 01:01:17.238265 | orchestrator | 2025-09-04 01:01:17.238275 | orchestrator | 2025-09-04 01:01:17.238286 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 01:01:17.238297 | orchestrator | Thursday 04 September 2025 01:01:15 +0000 (0:00:11.189) 0:01:22.806 **** 2025-09-04 01:01:17.238308 | orchestrator | =============================================================================== 2025-09-04 01:01:17.238318 | orchestrator | Create admin user ------------------------------------------------------ 48.36s 2025-09-04 01:01:17.238329 | orchestrator | Restart ceph manager service ------------------------------------------- 23.97s 2025-09-04 01:01:17.238340 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.23s 2025-09-04 01:01:17.238352 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.32s 2025-09-04 01:01:17.238371 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.25s 2025-09-04 01:01:17.238390 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.25s 2025-09-04 01:01:17.238410 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.13s 2025-09-04 01:01:17.238428 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.06s 2025-09-04 01:01:17.238447 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.02s 2025-09-04 01:01:17.238465 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.84s 2025-09-04 01:01:17.238483 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.11s 2025-09-04 01:01:17.238504 | orchestrator | 2025-09-04 01:01:17 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:01:17.238526 | orchestrator | 2025-09-04 01:01:17 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:01:17.238547 | orchestrator | 2025-09-04 01:01:17 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:01:20.262433 | 
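The ceph dashboard bootstrap play summarised above reconfigures the Ceph manager dashboard module entirely through mgr configuration keys before re-enabling it and creating an admin account from a temporary password file. The equivalent ceph CLI sequence would look roughly as follows; the password file path is an assumption, and the playbook drives these steps from testbed-manager rather than an interactive shell:

  # Mirror of the dashboard tasks above: disable the module, adjust its settings,
  # re-enable it, and create the admin user from a password file (path assumed).
  ceph mgr module disable dashboard
  ceph config set mgr mgr/dashboard/ssl false
  ceph config set mgr mgr/dashboard/server_port 7000
  ceph config set mgr mgr/dashboard/server_addr 0.0.0.0
  ceph config set mgr mgr/dashboard/standby_behaviour error
  ceph config set mgr mgr/dashboard/standby_error_status_code 404
  ceph mgr module enable dashboard
  ceph dashboard ac-user-create admin -i /tmp/ceph_dashboard_password administrator
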
orchestrator | 2025-09-04 01:01:20 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:01:20.262541 | orchestrator | 2025-09-04 01:01:20 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:01:20.263155 | orchestrator | 2025-09-04 01:01:20 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:01:20.263641 | orchestrator | 2025-09-04 01:01:20 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:01:20.263661 | orchestrator | 2025-09-04 01:01:20 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:01:23.285522 | orchestrator | 2025-09-04 01:01:23 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:01:23.286191 | orchestrator | 2025-09-04 01:01:23 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:01:23.287281 | orchestrator | 2025-09-04 01:01:23 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:01:23.289213 | orchestrator | 2025-09-04 01:01:23 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:01:23.289236 | orchestrator | 2025-09-04 01:01:23 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:01:26.323634 | orchestrator | 2025-09-04 01:01:26 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:01:26.323854 | orchestrator | 2025-09-04 01:01:26 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:01:26.323903 | orchestrator | 2025-09-04 01:01:26 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:01:26.324596 | orchestrator | 2025-09-04 01:01:26 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:01:26.324620 | orchestrator | 2025-09-04 01:01:26 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:01:29.355426 | orchestrator | 2025-09-04 01:01:29 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:01:29.355728 | orchestrator | 2025-09-04 01:01:29 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:01:29.356414 | orchestrator | 2025-09-04 01:01:29 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:01:29.357161 | orchestrator | 2025-09-04 01:01:29 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:01:29.357176 | orchestrator | 2025-09-04 01:01:29 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:01:32.382825 | orchestrator | 2025-09-04 01:01:32 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:01:32.382929 | orchestrator | 2025-09-04 01:01:32 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:01:32.383364 | orchestrator | 2025-09-04 01:01:32 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:01:32.383844 | orchestrator | 2025-09-04 01:01:32 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:01:32.383944 | orchestrator | 2025-09-04 01:01:32 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:01:35.412142 | orchestrator | 2025-09-04 01:01:35 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:01:35.412970 | orchestrator | 2025-09-04 01:01:35 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:01:35.417808 | 
orchestrator | 2025-09-04 01:01:35 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:01:35.421139 | orchestrator | 2025-09-04 01:01:35 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:01:35.421159 | orchestrator | 2025-09-04 01:01:35 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:01:38.467541 | orchestrator | 2025-09-04 01:01:38 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:01:38.468170 | orchestrator | 2025-09-04 01:01:38 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:01:38.469104 | orchestrator | 2025-09-04 01:01:38 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:01:38.470074 | orchestrator | 2025-09-04 01:01:38 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:01:38.470106 | orchestrator | 2025-09-04 01:01:38 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:01:41.515160 | orchestrator | 2025-09-04 01:01:41 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:01:41.519813 | orchestrator | 2025-09-04 01:01:41 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:01:41.522587 | orchestrator | 2025-09-04 01:01:41 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:01:41.525796 | orchestrator | 2025-09-04 01:01:41 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:01:41.526440 | orchestrator | 2025-09-04 01:01:41 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:01:44.563723 | orchestrator | 2025-09-04 01:01:44 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:01:44.565826 | orchestrator | 2025-09-04 01:01:44 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:01:44.568734 | orchestrator | 2025-09-04 01:01:44 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:01:44.570220 | orchestrator | 2025-09-04 01:01:44 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:01:44.570414 | orchestrator | 2025-09-04 01:01:44 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:01:47.614187 | orchestrator | 2025-09-04 01:01:47 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:01:47.616959 | orchestrator | 2025-09-04 01:01:47 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:01:47.619253 | orchestrator | 2025-09-04 01:01:47 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:01:47.621108 | orchestrator | 2025-09-04 01:01:47 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:01:47.621459 | orchestrator | 2025-09-04 01:01:47 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:01:50.673163 | orchestrator | 2025-09-04 01:01:50 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:01:50.678361 | orchestrator | 2025-09-04 01:01:50 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:01:50.679661 | orchestrator | 2025-09-04 01:01:50 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:01:50.681145 | orchestrator | 2025-09-04 01:01:50 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:01:50.681237 | 
orchestrator | 2025-09-04 01:01:50 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:01:53.726606 | orchestrator | 2025-09-04 01:01:53 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:01:53.729075 | orchestrator | 2025-09-04 01:01:53 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:01:53.730742 | orchestrator | 2025-09-04 01:01:53 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:01:53.733758 | orchestrator | 2025-09-04 01:01:53 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:01:53.733783 | orchestrator | 2025-09-04 01:01:53 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:01:56.784305 | orchestrator | 2025-09-04 01:01:56 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:01:56.784457 | orchestrator | 2025-09-04 01:01:56 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:01:56.786115 | orchestrator | 2025-09-04 01:01:56 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:01:56.787606 | orchestrator | 2025-09-04 01:01:56 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:01:56.788113 | orchestrator | 2025-09-04 01:01:56 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:01:59.828866 | orchestrator | 2025-09-04 01:01:59 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:01:59.828955 | orchestrator | 2025-09-04 01:01:59 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:01:59.831191 | orchestrator | 2025-09-04 01:01:59 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:01:59.832018 | orchestrator | 2025-09-04 01:01:59 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:01:59.832213 | orchestrator | 2025-09-04 01:01:59 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:02:02.874853 | orchestrator | 2025-09-04 01:02:02 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:02:02.875599 | orchestrator | 2025-09-04 01:02:02 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:02:02.876683 | orchestrator | 2025-09-04 01:02:02 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:02:02.877620 | orchestrator | 2025-09-04 01:02:02 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:02:02.877734 | orchestrator | 2025-09-04 01:02:02 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:02:05.914083 | orchestrator | 2025-09-04 01:02:05 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:02:05.915621 | orchestrator | 2025-09-04 01:02:05 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:02:05.916451 | orchestrator | 2025-09-04 01:02:05 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:02:05.917501 | orchestrator | 2025-09-04 01:02:05 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:02:05.917523 | orchestrator | 2025-09-04 01:02:05 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:02:08.958687 | orchestrator | 2025-09-04 01:02:08 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:02:08.960563 | orchestrator | 2025-09-04 
01:02:08 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:02:08.963408 | orchestrator | 2025-09-04 01:02:08 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:02:08.965605 | orchestrator | 2025-09-04 01:02:08 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:02:08.966183 | orchestrator | 2025-09-04 01:02:08 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:02:12.010103 | orchestrator | 2025-09-04 01:02:12 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:02:12.013042 | orchestrator | 2025-09-04 01:02:12 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:02:12.014512 | orchestrator | 2025-09-04 01:02:12 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:02:12.017649 | orchestrator | 2025-09-04 01:02:12 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:02:12.017683 | orchestrator | 2025-09-04 01:02:12 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:02:15.064883 | orchestrator | 2025-09-04 01:02:15 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:02:15.067977 | orchestrator | 2025-09-04 01:02:15 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:02:15.069201 | orchestrator | 2025-09-04 01:02:15 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:02:15.071353 | orchestrator | 2025-09-04 01:02:15 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:02:15.071574 | orchestrator | 2025-09-04 01:02:15 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:02:18.102262 | orchestrator | 2025-09-04 01:02:18 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:02:18.102774 | orchestrator | 2025-09-04 01:02:18 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:02:18.103753 | orchestrator | 2025-09-04 01:02:18 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:02:18.104581 | orchestrator | 2025-09-04 01:02:18 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:02:18.104908 | orchestrator | 2025-09-04 01:02:18 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:02:21.141558 | orchestrator | 2025-09-04 01:02:21 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:02:21.142641 | orchestrator | 2025-09-04 01:02:21 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:02:21.143430 | orchestrator | 2025-09-04 01:02:21 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:02:21.144219 | orchestrator | 2025-09-04 01:02:21 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:02:21.144471 | orchestrator | 2025-09-04 01:02:21 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:02:24.167926 | orchestrator | 2025-09-04 01:02:24 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:02:24.168065 | orchestrator | 2025-09-04 01:02:24 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:02:24.168468 | orchestrator | 2025-09-04 01:02:24 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:02:24.169779 | orchestrator | 2025-09-04 
01:02:24 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:02:24.169802 | orchestrator | 2025-09-04 01:02:24 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:02:27.198779 | orchestrator | 2025-09-04 01:02:27 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:02:27.198962 | orchestrator | 2025-09-04 01:02:27 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:02:27.199449 | orchestrator | 2025-09-04 01:02:27 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:02:27.199828 | orchestrator | 2025-09-04 01:02:27 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:02:27.199939 | orchestrator | 2025-09-04 01:02:27 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:02:30.259578 | orchestrator | 2025-09-04 01:02:30 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:02:30.261558 | orchestrator | 2025-09-04 01:02:30 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:02:30.262673 | orchestrator | 2025-09-04 01:02:30 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:02:30.263670 | orchestrator | 2025-09-04 01:02:30 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:02:30.263710 | orchestrator | 2025-09-04 01:02:30 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:02:33.296263 | orchestrator | 2025-09-04 01:02:33 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:02:33.297601 | orchestrator | 2025-09-04 01:02:33 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:02:33.297631 | orchestrator | 2025-09-04 01:02:33 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:02:33.298482 | orchestrator | 2025-09-04 01:02:33 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:02:33.298506 | orchestrator | 2025-09-04 01:02:33 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:02:36.323741 | orchestrator | 2025-09-04 01:02:36 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:02:36.323945 | orchestrator | 2025-09-04 01:02:36 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:02:36.325200 | orchestrator | 2025-09-04 01:02:36 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:02:36.326898 | orchestrator | 2025-09-04 01:02:36 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:02:36.326926 | orchestrator | 2025-09-04 01:02:36 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:02:39.383079 | orchestrator | 2025-09-04 01:02:39 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:02:39.385231 | orchestrator | 2025-09-04 01:02:39 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:02:39.387354 | orchestrator | 2025-09-04 01:02:39 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:02:39.389635 | orchestrator | 2025-09-04 01:02:39 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:02:39.389662 | orchestrator | 2025-09-04 01:02:39 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:02:42.436515 | orchestrator | 2025-09-04 01:02:42 | INFO  | Task 
f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:02:42.439389 | orchestrator | 2025-09-04 01:02:42 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:02:42.440638 | orchestrator | 2025-09-04 01:02:42 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:02:42.442098 | orchestrator | 2025-09-04 01:02:42 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:02:42.442125 | orchestrator | 2025-09-04 01:02:42 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:02:45.482720 | orchestrator | 2025-09-04 01:02:45 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:02:45.486951 | orchestrator | 2025-09-04 01:02:45 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:02:45.490312 | orchestrator | 2025-09-04 01:02:45 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:02:45.492907 | orchestrator | 2025-09-04 01:02:45 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:02:45.493038 | orchestrator | 2025-09-04 01:02:45 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:02:48.534457 | orchestrator | 2025-09-04 01:02:48 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:02:48.535250 | orchestrator | 2025-09-04 01:02:48 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:02:48.536786 | orchestrator | 2025-09-04 01:02:48 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:02:48.537951 | orchestrator | 2025-09-04 01:02:48 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:02:48.538128 | orchestrator | 2025-09-04 01:02:48 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:02:51.574799 | orchestrator | 2025-09-04 01:02:51 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:02:51.576572 | orchestrator | 2025-09-04 01:02:51 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:02:51.578733 | orchestrator | 2025-09-04 01:02:51 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state STARTED 2025-09-04 01:02:51.581056 | orchestrator | 2025-09-04 01:02:51 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:02:51.581081 | orchestrator | 2025-09-04 01:02:51 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:02:54.626881 | orchestrator | 2025-09-04 01:02:54 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:02:54.627064 | orchestrator | 2025-09-04 01:02:54 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:02:54.627650 | orchestrator | 2025-09-04 01:02:54 | INFO  | Task 932416c6-7d83-4528-b625-74aaf5ddd0cc is in state SUCCESS 2025-09-04 01:02:54.629133 | orchestrator | 2025-09-04 01:02:54.629178 | orchestrator | 2025-09-04 01:02:54.629189 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-04 01:02:54.629199 | orchestrator | 2025-09-04 01:02:54.629209 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-04 01:02:54.629219 | orchestrator | Thursday 04 September 2025 01:00:01 +0000 (0:00:00.267) 0:00:00.267 **** 2025-09-04 01:02:54.629229 | orchestrator | ok: [testbed-node-0] 2025-09-04 01:02:54.629240 | 
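The glance play that follows begins with Keystone registration: the service-ks-register tasks create the glance image service, its internal (https://api-int.testbed.osism.xyz:9292) and public (https://api.testbed.osism.xyz:9292) endpoints, the service project, the glance user, and grant that user the admin role on the project. kolla-ansible performs these steps with Ansible modules; purely as an illustration, roughly the same registration could be driven from Python through the openstack CLI (the region name and password below are placeholders, not values from this deployment):

```python
import subprocess

# Illustrative only: the play uses the service-ks-register role, not shell
# commands. Region name and password are placeholders.
COMMANDS = [
    ["openstack", "service", "create", "--name", "glance", "image"],
    ["openstack", "endpoint", "create", "--region", "RegionOne",
     "image", "internal", "https://api-int.testbed.osism.xyz:9292"],
    ["openstack", "endpoint", "create", "--region", "RegionOne",
     "image", "public", "https://api.testbed.osism.xyz:9292"],
    ["openstack", "project", "create", "--domain", "default", "service"],
    ["openstack", "user", "create", "--domain", "default",
     "--password", "PLACEHOLDER", "glance"],
    ["openstack", "role", "add", "--project", "service",
     "--user", "glance", "admin"],
]

for cmd in COMMANDS:
    # check=True aborts at the first registration step that fails
    subprocess.run(cmd, check=True)
```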
orchestrator | ok: [testbed-node-1] 2025-09-04 01:02:54.629252 | orchestrator | ok: [testbed-node-2] 2025-09-04 01:02:54.629269 | orchestrator | 2025-09-04 01:02:54.629285 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-04 01:02:54.629301 | orchestrator | Thursday 04 September 2025 01:00:01 +0000 (0:00:00.274) 0:00:00.541 **** 2025-09-04 01:02:54.629316 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-09-04 01:02:54.629333 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-09-04 01:02:54.629355 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-09-04 01:02:54.629441 | orchestrator | 2025-09-04 01:02:54.629460 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-09-04 01:02:54.629476 | orchestrator | 2025-09-04 01:02:54.629492 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-04 01:02:54.629509 | orchestrator | Thursday 04 September 2025 01:00:02 +0000 (0:00:00.425) 0:00:00.967 **** 2025-09-04 01:02:54.629528 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 01:02:54.629545 | orchestrator | 2025-09-04 01:02:54.629561 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-09-04 01:02:54.629577 | orchestrator | Thursday 04 September 2025 01:00:03 +0000 (0:00:01.333) 0:00:02.300 **** 2025-09-04 01:02:54.629595 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-09-04 01:02:54.629614 | orchestrator | 2025-09-04 01:02:54.629631 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-09-04 01:02:54.629649 | orchestrator | Thursday 04 September 2025 01:00:07 +0000 (0:00:03.840) 0:00:06.141 **** 2025-09-04 01:02:54.629666 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-09-04 01:02:54.629684 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-09-04 01:02:54.629703 | orchestrator | 2025-09-04 01:02:54.629721 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-09-04 01:02:54.629739 | orchestrator | Thursday 04 September 2025 01:00:13 +0000 (0:00:06.208) 0:00:12.350 **** 2025-09-04 01:02:54.629757 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-09-04 01:02:54.629775 | orchestrator | 2025-09-04 01:02:54.629791 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-09-04 01:02:54.629808 | orchestrator | Thursday 04 September 2025 01:00:17 +0000 (0:00:03.764) 0:00:16.114 **** 2025-09-04 01:02:54.629830 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-04 01:02:54.629853 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-09-04 01:02:54.629871 | orchestrator | 2025-09-04 01:02:54.629888 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-09-04 01:02:54.629933 | orchestrator | Thursday 04 September 2025 01:00:21 +0000 (0:00:04.368) 0:00:20.483 **** 2025-09-04 01:02:54.629952 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-04 01:02:54.629995 | orchestrator | 2025-09-04 01:02:54.630078 | orchestrator | TASK [service-ks-register : 
glance | Granting user roles] ********************** 2025-09-04 01:02:54.630146 | orchestrator | Thursday 04 September 2025 01:00:25 +0000 (0:00:03.419) 0:00:23.902 **** 2025-09-04 01:02:54.630165 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-09-04 01:02:54.630180 | orchestrator | 2025-09-04 01:02:54.630197 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-09-04 01:02:54.630213 | orchestrator | Thursday 04 September 2025 01:00:29 +0000 (0:00:04.584) 0:00:28.486 **** 2025-09-04 01:02:54.630274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-04 01:02:54.630301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 
5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-04 01:02:54.630342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-04 01:02:54.630360 | orchestrator | 2025-09-04 01:02:54.630377 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-04 01:02:54.630394 | orchestrator | Thursday 04 September 2025 01:00:33 +0000 (0:00:03.451) 0:00:31.938 **** 2025-09-04 01:02:54.630411 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 01:02:54.630430 | orchestrator | 2025-09-04 01:02:54.630456 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-09-04 01:02:54.630472 | orchestrator | Thursday 04 September 2025 01:00:33 +0000 (0:00:00.585) 0:00:32.523 **** 2025-09-04 01:02:54.630487 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:02:54.630502 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:02:54.630518 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:02:54.630533 | orchestrator | 2025-09-04 01:02:54.630548 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-09-04 01:02:54.630563 | orchestrator | Thursday 04 September 2025 01:00:37 +0000 (0:00:03.362) 0:00:35.886 **** 2025-09-04 01:02:54.630578 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 
'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-04 01:02:54.630594 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-04 01:02:54.630610 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-04 01:02:54.630626 | orchestrator | 2025-09-04 01:02:54.630641 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-09-04 01:02:54.630656 | orchestrator | Thursday 04 September 2025 01:00:38 +0000 (0:00:01.534) 0:00:37.420 **** 2025-09-04 01:02:54.630671 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-04 01:02:54.630687 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-04 01:02:54.630738 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-04 01:02:54.630771 | orchestrator | 2025-09-04 01:02:54.630787 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-09-04 01:02:54.630804 | orchestrator | Thursday 04 September 2025 01:00:39 +0000 (0:00:01.010) 0:00:38.431 **** 2025-09-04 01:02:54.630819 | orchestrator | ok: [testbed-node-0] 2025-09-04 01:02:54.630836 | orchestrator | ok: [testbed-node-1] 2025-09-04 01:02:54.630850 | orchestrator | ok: [testbed-node-2] 2025-09-04 01:02:54.630865 | orchestrator | 2025-09-04 01:02:54.630881 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-09-04 01:02:54.630895 | orchestrator | Thursday 04 September 2025 01:00:40 +0000 (0:00:00.604) 0:00:39.036 **** 2025-09-04 01:02:54.630911 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:02:54.630925 | orchestrator | 2025-09-04 01:02:54.630941 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-09-04 01:02:54.630986 | orchestrator | Thursday 04 September 2025 01:00:40 +0000 (0:00:00.292) 0:00:39.328 **** 2025-09-04 01:02:54.631004 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:02:54.631028 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:02:54.631047 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:02:54.631063 | orchestrator | 2025-09-04 01:02:54.631078 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-04 01:02:54.631094 | orchestrator | Thursday 04 September 2025 01:00:40 +0000 (0:00:00.359) 0:00:39.687 **** 2025-09-04 01:02:54.631113 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 01:02:54.631131 | orchestrator | 2025-09-04 01:02:54.631149 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-09-04 01:02:54.631165 | orchestrator | Thursday 04 September 2025 01:00:41 +0000 (0:00:00.590) 0:00:40.277 **** 2025-09-04 01:02:54.631212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-04 01:02:54.631237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-04 01:02:54.631270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-04 01:02:54.631289 | orchestrator | 2025-09-04 01:02:54.631306 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-09-04 01:02:54.631322 | orchestrator | Thursday 04 September 2025 01:00:46 +0000 (0:00:05.302) 0:00:45.579 **** 2025-09-04 01:02:54.631362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-04 01:02:54.631400 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:02:54.631424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-04 01:02:54.631441 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:02:54.631479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-04 01:02:54.631514 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:02:54.631536 | orchestrator | 2025-09-04 01:02:54.631552 | orchestrator | TASK 
[service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-09-04 01:02:54.631569 | orchestrator | Thursday 04 September 2025 01:00:52 +0000 (0:00:05.216) 0:00:50.796 **** 2025-09-04 01:02:54.631587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-04 01:02:54.631605 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:02:54.631642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 
check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-04 01:02:54.631682 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:02:54.631699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-04 01:02:54.631718 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:02:54.631737 | orchestrator | 2025-09-04 01:02:54.631762 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-09-04 01:02:54.631780 | orchestrator | Thursday 04 September 2025 01:00:55 +0000 (0:00:03.683) 0:00:54.480 **** 2025-09-04 01:02:54.631802 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:02:54.631826 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:02:54.631843 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:02:54.631860 | orchestrator | 2025-09-04 01:02:54.631876 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-09-04 01:02:54.631894 | orchestrator | Thursday 04 September 2025 01:00:59 +0000 (0:00:03.419) 0:00:57.899 **** 2025-09-04 01:02:54.631930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-04 01:02:54.632033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-04 01:02:54.632058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-04 01:02:54.632077 | orchestrator | 2025-09-04 01:02:54.632093 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-09-04 01:02:54.632110 | orchestrator | Thursday 04 September 2025 01:01:03 +0000 (0:00:04.478) 0:01:02.378 **** 2025-09-04 01:02:54.632137 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:02:54.632155 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:02:54.632172 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:02:54.632186 | orchestrator | 2025-09-04 01:02:54.632210 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-09-04 01:02:54.632227 | orchestrator | Thursday 04 September 2025 01:01:11 +0000 (0:00:07.777) 0:01:10.156 **** 2025-09-04 01:02:54.632244 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:02:54.632261 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:02:54.632278 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:02:54.632295 | orchestrator | 2025-09-04 01:02:54.632312 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-09-04 01:02:54.632341 | orchestrator | Thursday 04 September 2025 01:01:18 +0000 (0:00:06.696) 0:01:16.853 **** 2025-09-04 01:02:54.632359 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:02:54.632375 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:02:54.632390 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:02:54.632406 | orchestrator | 2025-09-04 01:02:54.632419 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-09-04 01:02:54.632431 | orchestrator | Thursday 04 September 2025 01:01:23 +0000 (0:00:05.247) 0:01:22.100 **** 2025-09-04 01:02:54.632443 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:02:54.632456 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:02:54.632468 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:02:54.632481 | orchestrator | 2025-09-04 01:02:54.632494 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-09-04 01:02:54.632510 | orchestrator | Thursday 04 September 2025 01:01:28 +0000 (0:00:05.402) 0:01:27.503 **** 2025-09-04 01:02:54.632528 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:02:54.632540 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:02:54.632552 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:02:54.632564 | orchestrator | 2025-09-04 01:02:54.632576 | 
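Each glance_api container definition in the items above also carries a healthcheck: healthcheck_curl against http://<api_interface_address>:9292 with an interval and timeout of 30 and 3 retries. healthcheck_curl is a helper script shipped in the Kolla images; its exact semantics are not visible in this log, so the following Python probe is only a rough stand-in that treats any HTTP answer from the API as healthy and only connection failures or timeouts as unhealthy:

```python
import sys
import urllib.error
import urllib.request


def probe(url="http://192.168.16.10:9292", timeout=30, retries=3):
    """Rough stand-in for the container healthcheck against the Glance API."""
    for _ in range(retries):
        try:
            urllib.request.urlopen(url, timeout=timeout)
            return True
        except urllib.error.HTTPError:
            # The API answered, just not with 2xx; for this sketch that
            # still counts as the service being up.
            return True
        except (urllib.error.URLError, OSError):
            continue
    return False


if __name__ == "__main__":
    # Exit code convention assumed: 0 healthy, 1 unhealthy.
    sys.exit(0 if probe() else 1)
```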
orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-09-04 01:02:54.632589 | orchestrator | Thursday 04 September 2025 01:01:32 +0000 (0:00:03.927) 0:01:31.430 **** 2025-09-04 01:02:54.632602 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:02:54.632614 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:02:54.632626 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:02:54.632640 | orchestrator | 2025-09-04 01:02:54.632651 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-09-04 01:02:54.632664 | orchestrator | Thursday 04 September 2025 01:01:32 +0000 (0:00:00.230) 0:01:31.661 **** 2025-09-04 01:02:54.632676 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-04 01:02:54.632690 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:02:54.632702 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-04 01:02:54.632715 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:02:54.632728 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-04 01:02:54.632741 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:02:54.632754 | orchestrator | 2025-09-04 01:02:54.632767 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-09-04 01:02:54.632779 | orchestrator | Thursday 04 September 2025 01:01:35 +0000 (0:00:02.923) 0:01:34.585 **** 2025-09-04 01:02:54.632795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-04 01:02:54.632845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 
'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-04 01:02:54.632863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-04 01:02:54.632886 | orchestrator | 2025-09-04 01:02:54.632900 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-04 01:02:54.632913 | orchestrator | Thursday 04 September 2025 01:01:39 +0000 
(0:00:04.122) 0:01:38.707 **** 2025-09-04 01:02:54.632927 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:02:54.632940 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:02:54.632953 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:02:54.632994 | orchestrator | 2025-09-04 01:02:54.633008 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-09-04 01:02:54.633021 | orchestrator | Thursday 04 September 2025 01:01:40 +0000 (0:00:00.296) 0:01:39.004 **** 2025-09-04 01:02:54.633034 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:02:54.633047 | orchestrator | 2025-09-04 01:02:54.633060 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-09-04 01:02:54.633073 | orchestrator | Thursday 04 September 2025 01:01:42 +0000 (0:00:02.151) 0:01:41.155 **** 2025-09-04 01:02:54.633086 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:02:54.633100 | orchestrator | 2025-09-04 01:02:54.633113 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-09-04 01:02:54.633126 | orchestrator | Thursday 04 September 2025 01:01:44 +0000 (0:00:02.297) 0:01:43.452 **** 2025-09-04 01:02:54.633139 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:02:54.633152 | orchestrator | 2025-09-04 01:02:54.633170 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-09-04 01:02:54.633183 | orchestrator | Thursday 04 September 2025 01:01:46 +0000 (0:00:02.171) 0:01:45.624 **** 2025-09-04 01:02:54.633197 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:02:54.633210 | orchestrator | 2025-09-04 01:02:54.633223 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-09-04 01:02:54.633236 | orchestrator | Thursday 04 September 2025 01:02:14 +0000 (0:00:27.467) 0:02:13.092 **** 2025-09-04 01:02:54.633249 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:02:54.633263 | orchestrator | 2025-09-04 01:02:54.633283 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-04 01:02:54.633296 | orchestrator | Thursday 04 September 2025 01:02:16 +0000 (0:00:02.475) 0:02:15.567 **** 2025-09-04 01:02:54.633309 | orchestrator | 2025-09-04 01:02:54.633322 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-04 01:02:54.633335 | orchestrator | Thursday 04 September 2025 01:02:16 +0000 (0:00:00.061) 0:02:15.628 **** 2025-09-04 01:02:54.633348 | orchestrator | 2025-09-04 01:02:54.633361 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-04 01:02:54.633374 | orchestrator | Thursday 04 September 2025 01:02:16 +0000 (0:00:00.065) 0:02:15.694 **** 2025-09-04 01:02:54.633388 | orchestrator | 2025-09-04 01:02:54.633401 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-09-04 01:02:54.633414 | orchestrator | Thursday 04 September 2025 01:02:16 +0000 (0:00:00.071) 0:02:15.765 **** 2025-09-04 01:02:54.633427 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:02:54.633440 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:02:54.633453 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:02:54.633464 | orchestrator | 2025-09-04 01:02:54.633477 | orchestrator | PLAY RECAP ********************************************************************* 
2025-09-04 01:02:54.633492 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-04 01:02:54.633515 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-04 01:02:54.633530 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-04 01:02:54.633543 | orchestrator | 2025-09-04 01:02:54.633557 | orchestrator | 2025-09-04 01:02:54.633570 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 01:02:54.633583 | orchestrator | Thursday 04 September 2025 01:02:53 +0000 (0:00:36.303) 0:02:52.069 **** 2025-09-04 01:02:54.633595 | orchestrator | =============================================================================== 2025-09-04 01:02:54.633609 | orchestrator | glance : Restart glance-api container ---------------------------------- 36.30s 2025-09-04 01:02:54.633623 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 27.47s 2025-09-04 01:02:54.633636 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 7.78s 2025-09-04 01:02:54.633650 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 6.70s 2025-09-04 01:02:54.633663 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.21s 2025-09-04 01:02:54.633676 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 5.40s 2025-09-04 01:02:54.633690 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 5.30s 2025-09-04 01:02:54.633703 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 5.25s 2025-09-04 01:02:54.633716 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 5.22s 2025-09-04 01:02:54.633730 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.58s 2025-09-04 01:02:54.633743 | orchestrator | glance : Copying over config.json files for services -------------------- 4.48s 2025-09-04 01:02:54.633756 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.37s 2025-09-04 01:02:54.633769 | orchestrator | glance : Check glance containers ---------------------------------------- 4.12s 2025-09-04 01:02:54.633782 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.93s 2025-09-04 01:02:54.633794 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.84s 2025-09-04 01:02:54.633808 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.76s 2025-09-04 01:02:54.633820 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.68s 2025-09-04 01:02:54.633833 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.45s 2025-09-04 01:02:54.633846 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.42s 2025-09-04 01:02:54.633859 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.42s 2025-09-04 01:02:54.633872 | orchestrator | 2025-09-04 01:02:54 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:02:54.633885 | orchestrator | 2025-09-04 01:02:54 | INFO  | 
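The glance_api definition echoed per node by the "Check glance containers" task above carries everything kolla-ansible needs to run and load-balance the service: a Docker healthcheck (healthcheck_curl against the node's internal API address) plus per-service HAProxy parameters, where each custom_member_list entry is already a literal HAProxy server line and the 6-hour client/server timeouts presumably allow for long image uploads and downloads. Kolla-ansible drives Docker through its own modules rather than Compose, but as a rough, Compose-style illustration the healthcheck portion of the testbed-node-0 definition corresponds to something like:

# Illustrative sketch only: approximates the healthcheck from the glance-api
# container definition logged above; kolla-ansible does not use Compose.
services:
  glance_api:
    image: registry.osism.tech/kolla/glance-api:2024.2
    privileged: true
    volumes:
      - /etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro
      - glance:/var/lib/glance/
      - kolla_logs:/var/log/kolla/
    healthcheck:
      test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9292"]
      interval: 30s
      timeout: 30s
      retries: 3
      start_period: 5s
volumes:
  glance:
  kolla_logs:

The same definition is rendered for each node, differing only in the bound internal IP address (192.168.16.10/11/12).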
Wait 1 second(s) until the next check 2025-09-04 01:02:57.687507 | orchestrator | 2025-09-04 01:02:57 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:02:57.688274 | orchestrator | 2025-09-04 01:02:57 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:02:57.688948 | orchestrator | 2025-09-04 01:02:57 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:02:57.689908 | orchestrator | 2025-09-04 01:02:57 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:02:57.690212 | orchestrator | 2025-09-04 01:02:57 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:03:00.734678 | orchestrator | 2025-09-04 01:03:00 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:03:00.736486 | orchestrator | 2025-09-04 01:03:00 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:03:00.738838 | orchestrator | 2025-09-04 01:03:00 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:03:00.740490 | orchestrator | 2025-09-04 01:03:00 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:03:00.740631 | orchestrator | 2025-09-04 01:03:00 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:03:03.780105 | orchestrator | 2025-09-04 01:03:03 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:03:03.780209 | orchestrator | 2025-09-04 01:03:03 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:03:03.784533 | orchestrator | 2025-09-04 01:03:03 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:03:03.786153 | orchestrator | 2025-09-04 01:03:03 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:03:03.786177 | orchestrator | 2025-09-04 01:03:03 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:03:06.833428 | orchestrator | 2025-09-04 01:03:06 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:03:06.834379 | orchestrator | 2025-09-04 01:03:06 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:03:06.835757 | orchestrator | 2025-09-04 01:03:06 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:03:06.836414 | orchestrator | 2025-09-04 01:03:06 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:03:06.836436 | orchestrator | 2025-09-04 01:03:06 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:03:09.879646 | orchestrator | 2025-09-04 01:03:09 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:03:09.880103 | orchestrator | 2025-09-04 01:03:09 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:03:09.880681 | orchestrator | 2025-09-04 01:03:09 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:03:09.883862 | orchestrator | 2025-09-04 01:03:09 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:03:09.883889 | orchestrator | 2025-09-04 01:03:09 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:03:12.922867 | orchestrator | 2025-09-04 01:03:12 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:03:12.925654 | orchestrator | 2025-09-04 01:03:12 | INFO  | Task 
9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:03:12.929097 | orchestrator | 2025-09-04 01:03:12 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:03:12.931312 | orchestrator | 2025-09-04 01:03:12 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:03:12.931553 | orchestrator | 2025-09-04 01:03:12 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:03:15.977829 | orchestrator | 2025-09-04 01:03:15 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:03:15.981543 | orchestrator | 2025-09-04 01:03:15 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:03:15.984611 | orchestrator | 2025-09-04 01:03:15 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:03:15.986685 | orchestrator | 2025-09-04 01:03:15 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:03:15.986738 | orchestrator | 2025-09-04 01:03:15 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:03:19.037687 | orchestrator | 2025-09-04 01:03:19 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state STARTED 2025-09-04 01:03:19.039111 | orchestrator | 2025-09-04 01:03:19 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:03:19.040456 | orchestrator | 2025-09-04 01:03:19 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:03:19.041836 | orchestrator | 2025-09-04 01:03:19 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:03:19.041861 | orchestrator | 2025-09-04 01:03:19 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:03:22.087869 | orchestrator | 2025-09-04 01:03:22 | INFO  | Task f4a2c2e6-6770-4573-9ede-6a31381ab46a is in state SUCCESS 2025-09-04 01:03:22.088868 | orchestrator | 2025-09-04 01:03:22.088906 | orchestrator | 2025-09-04 01:03:22.088919 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-04 01:03:22.088931 | orchestrator | 2025-09-04 01:03:22.089274 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-04 01:03:22.089305 | orchestrator | Thursday 04 September 2025 00:59:53 +0000 (0:00:00.283) 0:00:00.283 **** 2025-09-04 01:03:22.089317 | orchestrator | ok: [testbed-manager] 2025-09-04 01:03:22.089330 | orchestrator | ok: [testbed-node-0] 2025-09-04 01:03:22.089341 | orchestrator | ok: [testbed-node-1] 2025-09-04 01:03:22.089352 | orchestrator | ok: [testbed-node-2] 2025-09-04 01:03:22.089363 | orchestrator | ok: [testbed-node-3] 2025-09-04 01:03:22.089374 | orchestrator | ok: [testbed-node-4] 2025-09-04 01:03:22.089384 | orchestrator | ok: [testbed-node-5] 2025-09-04 01:03:22.089396 | orchestrator | 2025-09-04 01:03:22.089408 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-04 01:03:22.089419 | orchestrator | Thursday 04 September 2025 00:59:54 +0000 (0:00:00.952) 0:00:01.235 **** 2025-09-04 01:03:22.089430 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-09-04 01:03:22.089442 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-09-04 01:03:22.089452 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-09-04 01:03:22.089463 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-09-04 01:03:22.089477 | 
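The INFO lines above come from the OSISM client polling the manager's task queue: four task UUIDs remain in state STARTED and are re-checked every few seconds until the first of them (f4a2c2e6-…) reaches SUCCESS, at which point its buffered Ansible output is printed; that is why the play timestamps below (starting at 00:59:53) predate the 01:03 console time. That output is a kolla-ansible run for the prometheus role, and it begins by sorting hosts into groups named after enabled services (the enable_prometheus_True item above; the remaining testbed nodes for the same task are listed directly below). A minimal sketch of this grouping pattern, assuming a boolean enable_prometheus inventory variable; kolla-ansible's real site.yml loops over many such service flags:

# Minimal sketch (not kolla-ansible's actual site.yml): collect every host on
# which a service is enabled into a group such as "enable_prometheus_True".
- name: Group hosts based on enabled services
  hosts: all
  gather_facts: false
  tasks:
    - name: Group hosts based on enabled services
      ansible.builtin.group_by:
        key: "{{ item }}"
      loop:
        - "enable_prometheus_{{ enable_prometheus | bool }}"

Subsequent plays can then simply target hosts: enable_prometheus_True instead of re-evaluating the flag in every task.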
orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-09-04 01:03:22.089620 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-09-04 01:03:22.089634 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-09-04 01:03:22.089648 | orchestrator | 2025-09-04 01:03:22.089661 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-09-04 01:03:22.089675 | orchestrator | 2025-09-04 01:03:22.089686 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-09-04 01:03:22.089697 | orchestrator | Thursday 04 September 2025 00:59:55 +0000 (0:00:00.724) 0:00:01.960 **** 2025-09-04 01:03:22.089709 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 01:03:22.089721 | orchestrator | 2025-09-04 01:03:22.089732 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-09-04 01:03:22.089743 | orchestrator | Thursday 04 September 2025 00:59:56 +0000 (0:00:01.618) 0:00:03.578 **** 2025-09-04 01:03:22.089758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-04 01:03:22.089800 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-04 01:03:22.089840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-04 01:03:22.089889 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-04 01:03:22.089923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-04 01:03:22.089937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 01:03:22.090007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 01:03:22.090189 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-04 01:03:22.090209 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-04 01:03:22.090233 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-04 01:03:22.090245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 01:03:22.090264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 01:03:22.090289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 01:03:22.090360 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-04 01:03:22.090374 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-04 01:03:22.090509 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-04 01:03:22.090567 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-04 01:03:22.090583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-04 01:03:22.090595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-04 01:03:22.090686 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-04 01:03:22.090704 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-04 01:03:22.090716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 01:03:22.090728 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-04 01:03:22.090747 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 01:03:22.090758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 01:03:22.090770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 01:03:22.090786 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-04 01:03:22.090805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-04 01:03:22.090817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 01:03:22.090829 | orchestrator | 2025-09-04 01:03:22.090840 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-09-04 01:03:22.090851 | orchestrator | Thursday 04 September 2025 01:00:00 +0000 (0:00:03.946) 0:00:07.525 **** 2025-09-04 01:03:22.090862 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 01:03:22.090881 | orchestrator | 2025-09-04 01:03:22.090892 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-09-04 01:03:22.090902 | orchestrator | Thursday 04 September 2025 01:00:01 +0000 (0:00:01.391) 0:00:08.916 **** 2025-09-04 01:03:22.090914 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-04 01:03:22.090926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-04 01:03:22.091013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-04 01:03:22.091038 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-04 01:03:22.091076 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-04 01:03:22.091090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-04 01:03:22.091101 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-04 01:03:22.091121 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-04 01:03:22.091133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 01:03:22.091145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 01:03:22.091156 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-04 01:03:22.091168 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-04 01:03:22.091192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 01:03:22.091204 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-04 01:03:22.091222 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-04 01:03:22.091234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 01:03:22.091246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 01:03:22.091272 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-04 01:03:22.091285 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-04 01:03:22.091336 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-04 01:03:22.091349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 01:03:22.091368 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-04 01:03:22.091399 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 01:03:22.091411 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-04 01:03:22.091422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-04 01:03:22.091433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-04 01:03:22.091445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 01:03:22.091478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 01:03:22.091490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 01:03:22.091509 | orchestrator | 2025-09-04 01:03:22.091520 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-09-04 
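The prometheus role iterates over the same container map several times (config-directory creation, extra-CA copy, and now the backend internal TLS certificate task whose header begins above and continues below), so the node-exporter, cAdvisor, mysqld-, memcached-, elasticsearch- and libvirt-exporter definitions are echoed once per pass and per host. The node exporter is the one that needs host-level visibility: it shares the host PID namespace (pid_mode: host) and bind-mounts the entire root filesystem read-only at /host with rslave propagation. A rough Compose-style illustration of that logged definition (kolla-ansible itself deploys it through its own container modules):

# Illustrative sketch only: approximates the prometheus-node-exporter
# definition from the log; kolla-ansible does not deploy it via Compose.
services:
  prometheus_node_exporter:
    image: registry.osism.tech/kolla/prometheus-node-exporter:2024.2
    pid: host                                   # 'pid_mode': 'host' in the log
    volumes:
      - /etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - kolla_logs:/var/log/kolla/
      - type: bind                              # '/:/host:ro,rslave' in the log
        source: /
        target: /host
        read_only: true
        bind:
          propagation: rslave
volumes:
  kolla_logs: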
01:03:22.091531 | orchestrator | Thursday 04 September 2025 01:00:07 +0000 (0:00:06.003) 0:00:14.920 **** 2025-09-04 01:03:22.091558 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-04 01:03:22.091570 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-04 01:03:22.091582 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-04 01:03:22.091594 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-04 01:03:22.091618 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 01:03:22.091638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-04 01:03:22.091650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 01:03:22.091661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 01:03:22.091672 | orchestrator | skipping: [testbed-manager] 2025-09-04 01:03:22.091684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-04 01:03:22.091696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 01:03:22.091708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-04 01:03:22.091719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 
'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 01:03:22.091741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 01:03:22.091765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-04 01:03:22.091777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 01:03:22.091788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-04 01:03:22.091799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 01:03:22.091811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 01:03:22.091822 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:03:22.091834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-04 01:03:22.091845 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:03:22.091861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 01:03:22.091888 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-04 01:03:22.091900 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:03:22.091911 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-04 01:03:22.091923 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-04 01:03:22.091934 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:03:22.091968 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-04 01:03:22.091980 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-04 01:03:22.091991 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-04 01:03:22.092002 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:03:22.092014 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-04 01:03:22.092037 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-04 01:03:22.092057 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-04 01:03:22.092069 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:03:22.092080 | orchestrator | 2025-09-04 01:03:22.092091 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-09-04 01:03:22.092102 | orchestrator | Thursday 04 September 2025 01:00:09 +0000 (0:00:01.817) 0:00:16.737 **** 2025-09-04 01:03:22.092113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-04 01:03:22.092125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 01:03:22.092136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 01:03:22.092148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-04 01:03:22.092159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 01:03:22.092177 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:03:22.092188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-04 01:03:22.092205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 01:03:22.092217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 01:03:22.092228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-04 01:03:22.092240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 01:03:22.092369 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-04 01:03:22.092395 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-04 01:03:22.092414 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-04 01:03:22.092441 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-04 01:03:22.092454 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 01:03:22.092465 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:03:22.092477 | orchestrator | skipping: [testbed-manager] 2025-09-04 01:03:22.092488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-04 01:03:22.092499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 01:03:22.092511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 01:03:22.092529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-04 01:03:22.092540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-04 01:03:22.092551 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:03:22.092573 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-04 01:03:22.092585 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-04 01:03:22.092597 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-04 01:03:22.092608 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:03:22.092619 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-04 
01:03:22.092631 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-04 01:03:22.092642 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-04 01:03:22.092660 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:03:22.092671 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-04 01:03:22.092687 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-04 01:03:22.092705 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-04 01:03:22.092717 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:03:22.092728 | orchestrator | 2025-09-04 01:03:22.092739 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-09-04 01:03:22.092750 | orchestrator | Thursday 04 September 2025 01:00:12 +0000 (0:00:02.195) 0:00:18.933 **** 2025-09-04 01:03:22.092761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-04 01:03:22.092773 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-04 01:03:22.092784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-04 01:03:22.092802 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-04 01:03:22.092814 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-04 01:03:22.092825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-04 01:03:22.092846 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-04 01:03:22.092859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 01:03:22.092870 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-04 01:03:22.092882 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-04 01:03:22.092893 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-04 01:03:22.092911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 01:03:22.092923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 01:03:22.092934 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-04 01:03:22.092975 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-04 01:03:22.092988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 01:03:22.092999 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-04 01:03:22.093011 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-04 01:03:22.093029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 01:03:22.093040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 01:03:22.093052 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-04 01:03:22.093063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-04 01:03:22.093087 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-04 01:03:22.093100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-04 01:03:22.093112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-04 01:03:22.093130 | 
orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 01:03:22.093141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 01:03:22.093153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 01:03:22.093164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 01:03:22.093176 | orchestrator | 2025-09-04 01:03:22.093195 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-09-04 01:03:22.093206 | orchestrator | Thursday 04 September 2025 01:00:17 +0000 (0:00:05.911) 0:00:24.844 **** 2025-09-04 01:03:22.093217 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-04 01:03:22.093228 | orchestrator | 2025-09-04 01:03:22.093239 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-09-04 01:03:22.093256 | orchestrator | Thursday 04 September 2025 01:00:18 +0000 (0:00:01.048) 0:00:25.893 **** 2025-09-04 01:03:22.093268 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094104, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.631026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.093280 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094104, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.631026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.093298 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094127, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6369796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.093310 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094104, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.631026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.093322 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094104, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.631026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.093333 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094127, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6369796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.093355 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094127, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6369796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
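For orientation, the loop items printed in the Prometheus tasks above are key/value entries of what looks like a per-service map in the kolla-ansible prometheus role: each value carries container_name, group, enabled, image, volumes and dimensions, plus optional fields such as pid_mode or haproxy. In the "Copying over config.json files" task, each host reports changed only for services whose group it belongs to (testbed-node-3/4/5 handle node-exporter, cadvisor and the libvirt exporter, while testbed-manager also handles the server, alertmanager and blackbox exporter), and other items show as skipping. The sketch below is a minimal Python model of that structure and selection, built only from the values visible in this log; the helper and the group assignments are hypothetical, not kolla-ansible's actual implementation.

# Minimal model (not kolla-ansible source) of the service map iterated in the
# tasks above. Field names and images mirror the log; group membership and the
# selection helper are illustrative assumptions.
PROMETHEUS_SERVICES = {
    "prometheus-server": {
        "container_name": "prometheus_server",
        "group": "prometheus",
        "enabled": True,
        "image": "registry.osism.tech/kolla/prometheus-v2-server:2024.2",
        "volumes": [
            "/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro",
            "prometheus_v2:/var/lib/prometheus",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
    },
    "prometheus-node-exporter": {
        "container_name": "prometheus_node_exporter",
        "group": "prometheus-node-exporter",
        "enabled": True,
        "image": "registry.osism.tech/kolla/prometheus-node-exporter:2024.2",
        "pid_mode": "host",
        "volumes": ["/:/host:ro,rslave", "kolla_logs:/var/log/kolla/"],
        "dimensions": {},
    },
    "prometheus-libvirt-exporter": {
        "container_name": "prometheus_libvirt_exporter",
        "group": "prometheus-libvirt-exporter",
        "enabled": True,
        "image": "registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2",
        "volumes": ["/run/libvirt:/run/libvirt:ro"],
        "dimensions": {},
    },
}

# Hypothetical group membership shaped like the hosts in this run: the manager
# runs the server-side pieces, plain control/compute nodes run only the
# exporters relevant to them.
HOST_GROUPS = {
    "testbed-manager": {"prometheus", "prometheus-node-exporter"},
    "testbed-node-0": {"prometheus-node-exporter"},
    "testbed-node-3": {"prometheus-node-exporter", "prometheus-libvirt-exporter"},
}


def services_for(host: str) -> list[str]:
    """Return the service keys a host would act on; other items appear as 'skipping'."""
    groups = HOST_GROUPS.get(host, set())
    return [
        name
        for name, svc in PROMETHEUS_SERVICES.items()
        if svc["enabled"] and svc["group"] in groups
    ]


if __name__ == "__main__":
    for host in HOST_GROUPS:
        print(f"{host}: {services_for(host)}")

Running the sketch prints, per host, the subset of services that would be acted on, which mirrors the changed/skipping split seen in the config.json task above; tasks that skip every item on every host (such as the backend internal TLS key copy) are instead gated by a play-level conditional that evaluated false in this run.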
2025-09-04 01:03:22.093367 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094100, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.630272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.093378 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094104, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.631026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.093396 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094127, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6369796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.093407 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094100, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.630272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.093418 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094100, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.630272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.093430 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094104, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.631026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-04 01:03:22.093451 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094119, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6352937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.093463 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094104, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.631026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.093481 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094119, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6352937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.093492 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094100, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.630272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.093504 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094119, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6352937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.093515 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094097, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 
'mtime': 1756944141.0, 'ctime': 1756944989.629272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.093527 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094127, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6369796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.093544 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094119, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6352937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.093563 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094127, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6369796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.093582 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094097, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.629272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.093593 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094100, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.630272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.093605 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094105, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.631193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.093616 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094097, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.629272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.093628 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094097, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.629272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.093644 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094100, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.630272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.093756 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094119, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6352937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.093777 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094118, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6349788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.093788 | orchestrator | skipping: [testbed-node-2] => (item={'path': 
'/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094105, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.631193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.093800 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094105, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.631193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.093811 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094105, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.631193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.093822 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094097, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.629272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.093839 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094107, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6312718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.093857 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094118, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6349788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False})  2025-09-04 01:03:22.093875 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094127, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6369796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-04 01:03:22.093886 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094118, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6349788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.093898 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094105, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.631193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.093909 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094119, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6352937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.093920 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094118, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6349788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.093936 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094107, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6312718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.093981 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094102, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.630272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.093993 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094107, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6312718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094005 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094107, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6312718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094070 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094102, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.630272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094086 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094118, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6349788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094098 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094097, 'dev': 92, 'nlink': 1, 'atime': 
1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.629272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094114 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094125, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6369796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094146 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094125, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6369796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094158 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094102, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.630272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094169 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094102, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.630272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094181 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094094, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.628722, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094192 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094094, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.628722, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094204 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094107, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6312718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094220 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094138, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.640272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094247 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094105, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.631193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094259 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094100, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.630272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-04 01:03:22.094270 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094138, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.640272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094282 | orchestrator | skipping: [testbed-node-1] => (item={'path': 
'/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094123, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.636272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094293 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094123, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.636272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094305 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094125, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6369796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094327 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094099, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.629272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094345 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094125, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6369796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094356 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094094, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.628722, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094367 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094095, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6290092, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094379 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094102, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.630272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094390 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094099, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.629272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094402 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094094, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.628722, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094420 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094118, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6349788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094442 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094138, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.640272, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094454 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094110, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6342719, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094466 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094138, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.640272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094477 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094107, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6312718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094489 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094123, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.636272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094500 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094125, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6369796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094517 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 
1094102, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.630272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094538 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094095, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6290092, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094550 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094119, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6352937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-04 01:03:22.094562 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094094, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.628722, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094573 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094099, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.629272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094584 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094109, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6316872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094596 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094125, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6369796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094613 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094110, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6342719, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094635 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094123, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.636272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094647 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094095, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6290092, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094659 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094138, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.640272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094670 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094136, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.639272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094681 | orchestrator | skipping: 
[testbed-node-1] 2025-09-04 01:03:22.094693 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094110, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6342719, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094711 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094123, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.636272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094723 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094099, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.629272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094744 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094094, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.628722, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094756 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094109, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6316872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094767 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094109, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6316872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094779 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094095, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6290092, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094790 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094099, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.629272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094808 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094136, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.639272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094819 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:03:22.094830 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094136, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.639272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094842 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:03:22.094863 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094138, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.640272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094875 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094097, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.629272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-04 01:03:22.094886 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094110, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6342719, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094898 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094095, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6290092, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094909 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094123, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.636272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094930 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094109, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6316872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094968 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094110, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6342719, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.094992 | orchestrator | skipping: [testbed-node-2] => (item={'path': 
'/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094136, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.639272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.095003 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:03:22.095015 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094099, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.629272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.095026 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094109, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6316872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.095038 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094095, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6290092, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.095056 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094136, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.639272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.095067 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:03:22.095079 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094110, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6342719, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.095090 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094109, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6316872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.095111 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094105, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.631193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-04 01:03:22.095123 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094136, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.639272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-04 01:03:22.095134 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:03:22.095145 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094118, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6349788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-04 01:03:22.095156 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094107, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6312718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-04 01:03:22.095174 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 
0, 'gid': 0, 'size': 5987, 'inode': 1094102, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.630272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-04 01:03:22.095185 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094125, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6369796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-04 01:03:22.095197 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094094, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.628722, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-04 01:03:22.095217 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094138, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.640272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-04 01:03:22.095230 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094123, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.636272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-04 01:03:22.095241 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094099, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.629272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-04 01:03:22.095258 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094095, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6290092, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-04 01:03:22.095270 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094110, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6342719, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-04 01:03:22.095281 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094109, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6316872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-04 01:03:22.095292 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094136, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.639272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-04 01:03:22.095303 | orchestrator | 2025-09-04 01:03:22.095314 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-09-04 01:03:22.095330 | orchestrator | Thursday 04 September 2025 01:00:46 +0000 (0:00:27.478) 0:00:53.371 **** 2025-09-04 01:03:22.095342 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-04 01:03:22.095353 | orchestrator | 2025-09-04 01:03:22.095364 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-09-04 01:03:22.095375 | orchestrator | Thursday 04 September 2025 01:00:47 +0000 (0:00:00.968) 0:00:54.339 **** 2025-09-04 01:03:22.095386 | orchestrator | [WARNING]: Skipped 2025-09-04 01:03:22.095402 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-04 01:03:22.095413 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-09-04 01:03:22.095424 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-04 01:03:22.095435 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-09-04 01:03:22.095446 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-04 01:03:22.095456 | 
orchestrator | [WARNING]: Skipped 2025-09-04 01:03:22.095468 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-04 01:03:22.095478 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-09-04 01:03:22.095489 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-04 01:03:22.095499 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-09-04 01:03:22.095511 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-04 01:03:22.095527 | orchestrator | [WARNING]: Skipped 2025-09-04 01:03:22.095538 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-04 01:03:22.095549 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-09-04 01:03:22.095560 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-04 01:03:22.095570 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-09-04 01:03:22.095581 | orchestrator | [WARNING]: Skipped 2025-09-04 01:03:22.095592 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-04 01:03:22.095602 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-09-04 01:03:22.095613 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-04 01:03:22.095624 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-09-04 01:03:22.095634 | orchestrator | [WARNING]: Skipped 2025-09-04 01:03:22.095645 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-04 01:03:22.095656 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-09-04 01:03:22.095666 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-04 01:03:22.095677 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-09-04 01:03:22.095687 | orchestrator | [WARNING]: Skipped 2025-09-04 01:03:22.095698 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-04 01:03:22.095709 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-09-04 01:03:22.095719 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-04 01:03:22.095730 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-09-04 01:03:22.095741 | orchestrator | [WARNING]: Skipped 2025-09-04 01:03:22.095752 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-04 01:03:22.095762 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-09-04 01:03:22.095773 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-04 01:03:22.095784 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-09-04 01:03:22.095794 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-04 01:03:22.095805 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-04 01:03:22.095816 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-04 01:03:22.095827 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-04 01:03:22.095837 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-04 01:03:22.095848 | orchestrator | 2025-09-04 01:03:22.095858 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-09-04 
01:03:22.095869 | orchestrator | Thursday 04 September 2025 01:00:50 +0000 (0:00:02.855) 0:00:57.195 **** 2025-09-04 01:03:22.095880 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-04 01:03:22.095891 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:03:22.095902 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-04 01:03:22.095913 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:03:22.095924 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-04 01:03:22.095934 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:03:22.095962 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-04 01:03:22.095974 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:03:22.095984 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-04 01:03:22.095995 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:03:22.096006 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-04 01:03:22.096022 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:03:22.096033 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-09-04 01:03:22.096044 | orchestrator | 2025-09-04 01:03:22.096055 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-09-04 01:03:22.096066 | orchestrator | Thursday 04 September 2025 01:01:10 +0000 (0:00:19.807) 0:01:17.003 **** 2025-09-04 01:03:22.096081 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-04 01:03:22.096092 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:03:22.096103 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-04 01:03:22.096119 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:03:22.096130 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-04 01:03:22.096141 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:03:22.096152 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-04 01:03:22.096163 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:03:22.096173 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-04 01:03:22.096184 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:03:22.096195 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-04 01:03:22.096206 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:03:22.096217 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-09-04 01:03:22.096228 | orchestrator | 2025-09-04 01:03:22.096239 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-09-04 01:03:22.096249 | orchestrator | Thursday 04 September 2025 01:01:14 +0000 (0:00:04.074) 0:01:21.077 **** 2025-09-04 01:03:22.096261 | orchestrator | skipping: [testbed-node-1] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-04 01:03:22.096273 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-04 01:03:22.096284 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-09-04 01:03:22.096295 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-04 01:03:22.096306 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-04 01:03:22.096317 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:03:22.096328 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:03:22.096338 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:03:22.096349 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:03:22.096360 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-04 01:03:22.096371 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:03:22.096382 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-04 01:03:22.096393 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:03:22.096404 | orchestrator | 2025-09-04 01:03:22.096415 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-09-04 01:03:22.096426 | orchestrator | Thursday 04 September 2025 01:01:17 +0000 (0:00:03.473) 0:01:24.551 **** 2025-09-04 01:03:22.096436 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-04 01:03:22.096471 | orchestrator | 2025-09-04 01:03:22.096482 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-09-04 01:03:22.096500 | orchestrator | Thursday 04 September 2025 01:01:18 +0000 (0:00:00.987) 0:01:25.538 **** 2025-09-04 01:03:22.096511 | orchestrator | skipping: [testbed-manager] 2025-09-04 01:03:22.096521 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:03:22.096532 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:03:22.096543 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:03:22.096553 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:03:22.096564 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:03:22.096575 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:03:22.096585 | orchestrator | 2025-09-04 01:03:22.096596 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-09-04 01:03:22.096607 | orchestrator | Thursday 04 September 2025 01:01:19 +0000 (0:00:01.261) 0:01:26.799 **** 2025-09-04 01:03:22.096617 | orchestrator | skipping: [testbed-manager] 2025-09-04 01:03:22.096628 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:03:22.096639 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:03:22.096649 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:03:22.096660 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:03:22.096671 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:03:22.096681 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:03:22.096692 | orchestrator | 
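The my.cnf laid down by the task above normally just carries the client credentials that mysqld_exporter uses to reach the local MariaDB instance on each control node. As an illustrative sketch only (the destination path, database user, and variable name below are assumptions for illustration, not values taken from this job or from kolla-ansible), an equivalent standalone Ansible task could look like:

- name: Copy my.cnf for mysqld_exporter (illustrative sketch, assumed paths/names)
  become: true
  ansible.builtin.copy:
    dest: /etc/kolla/prometheus-mysqld-exporter/my.cnf      # assumed destination
    mode: "0600"
    content: |
      [client]
      user = prometheus                                      # assumed exporter DB account
      password = {{ prometheus_mysql_exporter_password }}    # assumed variable name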
2025-09-04 01:03:22.096703 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-09-04 01:03:22.096713 | orchestrator | Thursday 04 September 2025 01:01:23 +0000 (0:00:03.594) 0:01:30.394 **** 2025-09-04 01:03:22.096724 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-04 01:03:22.096735 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-04 01:03:22.096746 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-04 01:03:22.096756 | orchestrator | skipping: [testbed-manager] 2025-09-04 01:03:22.096767 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:03:22.096777 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:03:22.096788 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-04 01:03:22.096803 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:03:22.096814 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-04 01:03:22.096825 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:03:22.096836 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-04 01:03:22.096999 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:03:22.097018 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-04 01:03:22.097029 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:03:22.097040 | orchestrator | 2025-09-04 01:03:22.097050 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-09-04 01:03:22.097061 | orchestrator | Thursday 04 September 2025 01:01:26 +0000 (0:00:02.908) 0:01:33.303 **** 2025-09-04 01:03:22.097072 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-04 01:03:22.097083 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-04 01:03:22.097094 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:03:22.097105 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:03:22.097116 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-04 01:03:22.097126 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:03:22.097137 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-09-04 01:03:22.097148 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-04 01:03:22.097167 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:03:22.097178 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-04 01:03:22.097189 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:03:22.097200 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-04 01:03:22.097211 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:03:22.097221 | orchestrator | 2025-09-04 01:03:22.097232 | orchestrator | TASK 
[prometheus : Find extra prometheus server config files] ****************** 2025-09-04 01:03:22.097243 | orchestrator | Thursday 04 September 2025 01:01:28 +0000 (0:00:02.030) 0:01:35.334 **** 2025-09-04 01:03:22.097253 | orchestrator | [WARNING]: Skipped 2025-09-04 01:03:22.097264 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-09-04 01:03:22.097275 | orchestrator | due to this access issue: 2025-09-04 01:03:22.097285 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-09-04 01:03:22.097296 | orchestrator | not a directory 2025-09-04 01:03:22.097307 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-04 01:03:22.097317 | orchestrator | 2025-09-04 01:03:22.097328 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-09-04 01:03:22.097339 | orchestrator | Thursday 04 September 2025 01:01:29 +0000 (0:00:00.871) 0:01:36.205 **** 2025-09-04 01:03:22.097350 | orchestrator | skipping: [testbed-manager] 2025-09-04 01:03:22.097360 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:03:22.097371 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:03:22.097381 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:03:22.097392 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:03:22.097403 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:03:22.097414 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:03:22.097424 | orchestrator | 2025-09-04 01:03:22.097435 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-09-04 01:03:22.097446 | orchestrator | Thursday 04 September 2025 01:01:30 +0000 (0:00:00.958) 0:01:37.164 **** 2025-09-04 01:03:22.097456 | orchestrator | skipping: [testbed-manager] 2025-09-04 01:03:22.097467 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:03:22.097478 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:03:22.097488 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:03:22.097499 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:03:22.097509 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:03:22.097520 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:03:22.097530 | orchestrator | 2025-09-04 01:03:22.097541 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-09-04 01:03:22.097552 | orchestrator | Thursday 04 September 2025 01:01:31 +0000 (0:00:00.925) 0:01:38.089 **** 2025-09-04 01:03:22.097564 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-04 01:03:22.097590 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 
'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-04 01:03:22.097614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-04 01:03:22.097628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-04 01:03:22.097642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-04 01:03:22.097656 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-04 01:03:22.097670 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-04 01:03:22.097684 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-04 01:03:22.097698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 01:03:22.097722 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-04 01:03:22.097741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 01:03:22.097753 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-04 01:03:22.097765 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-04 01:03:22.097777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 01:03:22.097788 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 
'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-04 01:03:22.097800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 01:03:22.097821 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-04 01:03:22.097841 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-04 01:03:22.097852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 01:03:22.097864 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-04 
01:03:22.097875 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-04 01:03:22.097886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 01:03:22.097898 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 01:03:22.097909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-04 01:03:22.097936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-04 01:03:22.097967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-04 01:03:22.097979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 01:03:22.097991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 01:03:22.098002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-04 01:03:22.098013 | orchestrator | 2025-09-04 01:03:22.098056 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-09-04 01:03:22.098068 | orchestrator | Thursday 04 September 2025 01:01:35 +0000 (0:00:04.398) 0:01:42.488 **** 2025-09-04 01:03:22.098079 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-04 01:03:22.098090 | orchestrator | skipping: [testbed-manager] 2025-09-04 01:03:22.098101 | orchestrator | 2025-09-04 01:03:22.098112 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-04 01:03:22.098122 | orchestrator | Thursday 04 September 2025 01:01:36 +0000 (0:00:01.084) 0:01:43.572 **** 2025-09-04 01:03:22.098133 | orchestrator | 2025-09-04 01:03:22.098144 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-04 01:03:22.098155 | orchestrator | Thursday 04 September 2025 01:01:36 +0000 (0:00:00.057) 0:01:43.630 **** 2025-09-04 01:03:22.098165 | orchestrator | 2025-09-04 01:03:22.098176 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-04 01:03:22.098194 | orchestrator | Thursday 04 September 2025 01:01:36 +0000 (0:00:00.059) 0:01:43.690 **** 2025-09-04 01:03:22.098205 | orchestrator | 2025-09-04 01:03:22.098216 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-04 01:03:22.098226 | orchestrator | Thursday 04 September 2025 01:01:36 +0000 (0:00:00.058) 0:01:43.748 **** 2025-09-04 01:03:22.098237 | orchestrator | 2025-09-04 01:03:22.098247 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-04 01:03:22.098258 | orchestrator | Thursday 04 September 2025 01:01:36 +0000 (0:00:00.160) 0:01:43.908 **** 2025-09-04 01:03:22.098269 | orchestrator | 2025-09-04 01:03:22.098279 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-04 01:03:22.098290 | orchestrator | Thursday 04 
September 2025 01:01:37 +0000 (0:00:00.065) 0:01:43.974 **** 2025-09-04 01:03:22.098300 | orchestrator | 2025-09-04 01:03:22.098311 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-04 01:03:22.098322 | orchestrator | Thursday 04 September 2025 01:01:37 +0000 (0:00:00.065) 0:01:44.039 **** 2025-09-04 01:03:22.098332 | orchestrator | 2025-09-04 01:03:22.098343 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-09-04 01:03:22.098354 | orchestrator | Thursday 04 September 2025 01:01:37 +0000 (0:00:00.109) 0:01:44.148 **** 2025-09-04 01:03:22.098364 | orchestrator | changed: [testbed-manager] 2025-09-04 01:03:22.098375 | orchestrator | 2025-09-04 01:03:22.098390 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-09-04 01:03:22.098401 | orchestrator | Thursday 04 September 2025 01:01:57 +0000 (0:00:20.324) 0:02:04.472 **** 2025-09-04 01:03:22.098412 | orchestrator | changed: [testbed-node-4] 2025-09-04 01:03:22.098423 | orchestrator | changed: [testbed-manager] 2025-09-04 01:03:22.098433 | orchestrator | changed: [testbed-node-3] 2025-09-04 01:03:22.098450 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:03:22.098461 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:03:22.098471 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:03:22.098482 | orchestrator | changed: [testbed-node-5] 2025-09-04 01:03:22.098493 | orchestrator | 2025-09-04 01:03:22.098503 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-09-04 01:03:22.098515 | orchestrator | Thursday 04 September 2025 01:02:11 +0000 (0:00:14.423) 0:02:18.896 **** 2025-09-04 01:03:22.098525 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:03:22.098536 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:03:22.098546 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:03:22.098557 | orchestrator | 2025-09-04 01:03:22.098568 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-09-04 01:03:22.098578 | orchestrator | Thursday 04 September 2025 01:02:17 +0000 (0:00:05.784) 0:02:24.681 **** 2025-09-04 01:03:22.098589 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:03:22.098600 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:03:22.098610 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:03:22.098620 | orchestrator | 2025-09-04 01:03:22.098631 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-09-04 01:03:22.098642 | orchestrator | Thursday 04 September 2025 01:02:25 +0000 (0:00:07.669) 0:02:32.351 **** 2025-09-04 01:03:22.098653 | orchestrator | changed: [testbed-node-3] 2025-09-04 01:03:22.098663 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:03:22.098674 | orchestrator | changed: [testbed-manager] 2025-09-04 01:03:22.098684 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:03:22.098695 | orchestrator | changed: [testbed-node-4] 2025-09-04 01:03:22.098705 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:03:22.098716 | orchestrator | changed: [testbed-node-5] 2025-09-04 01:03:22.098726 | orchestrator | 2025-09-04 01:03:22.098737 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-09-04 01:03:22.098748 | orchestrator | Thursday 04 September 2025 01:02:40 +0000 (0:00:14.655) 0:02:47.006 **** 2025-09-04 
01:03:22.098758 | orchestrator | changed: [testbed-manager] 2025-09-04 01:03:22.098776 | orchestrator | 2025-09-04 01:03:22.098787 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-09-04 01:03:22.098798 | orchestrator | Thursday 04 September 2025 01:02:54 +0000 (0:00:14.526) 0:03:01.533 **** 2025-09-04 01:03:22.098808 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:03:22.098819 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:03:22.098830 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:03:22.098840 | orchestrator | 2025-09-04 01:03:22.098851 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-09-04 01:03:22.098861 | orchestrator | Thursday 04 September 2025 01:03:04 +0000 (0:00:10.091) 0:03:11.625 **** 2025-09-04 01:03:22.098872 | orchestrator | changed: [testbed-manager] 2025-09-04 01:03:22.098883 | orchestrator | 2025-09-04 01:03:22.098894 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-09-04 01:03:22.098904 | orchestrator | Thursday 04 September 2025 01:03:15 +0000 (0:00:10.375) 0:03:22.001 **** 2025-09-04 01:03:22.098915 | orchestrator | changed: [testbed-node-5] 2025-09-04 01:03:22.098926 | orchestrator | changed: [testbed-node-3] 2025-09-04 01:03:22.098936 | orchestrator | changed: [testbed-node-4] 2025-09-04 01:03:22.098966 | orchestrator | 2025-09-04 01:03:22.098977 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 01:03:22.098987 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-04 01:03:22.098999 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-04 01:03:22.099011 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-04 01:03:22.099021 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-04 01:03:22.099032 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-04 01:03:22.099043 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-04 01:03:22.099054 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-04 01:03:22.099065 | orchestrator | 2025-09-04 01:03:22.099076 | orchestrator | 2025-09-04 01:03:22.099087 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 01:03:22.099097 | orchestrator | Thursday 04 September 2025 01:03:21 +0000 (0:00:06.152) 0:03:28.153 **** 2025-09-04 01:03:22.099108 | orchestrator | =============================================================================== 2025-09-04 01:03:22.099119 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 27.48s 2025-09-04 01:03:22.099130 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 20.32s 2025-09-04 01:03:22.099140 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 19.81s 2025-09-04 01:03:22.099156 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.66s 2025-09-04 01:03:22.099167 | orchestrator | prometheus : Restart 
prometheus-alertmanager container ----------------- 14.53s 2025-09-04 01:03:22.099178 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.42s 2025-09-04 01:03:22.099188 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 10.38s 2025-09-04 01:03:22.099204 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.09s 2025-09-04 01:03:22.099215 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 7.67s 2025-09-04 01:03:22.099233 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 6.15s 2025-09-04 01:03:22.099244 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.00s 2025-09-04 01:03:22.099254 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.91s 2025-09-04 01:03:22.099265 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 5.79s 2025-09-04 01:03:22.099276 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.40s 2025-09-04 01:03:22.099286 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.07s 2025-09-04 01:03:22.099297 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.95s 2025-09-04 01:03:22.099308 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.59s 2025-09-04 01:03:22.099318 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 3.47s 2025-09-04 01:03:22.099329 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.91s 2025-09-04 01:03:22.099339 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.86s 2025-09-04 01:03:22.099350 | orchestrator | 2025-09-04 01:03:22 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:03:22.099361 | orchestrator | 2025-09-04 01:03:22 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:03:22.099371 | orchestrator | 2025-09-04 01:03:22 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:03:22.099382 | orchestrator | 2025-09-04 01:03:22 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:03:25.141890 | orchestrator | 2025-09-04 01:03:25 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:03:25.143599 | orchestrator | 2025-09-04 01:03:25 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:03:25.145316 | orchestrator | 2025-09-04 01:03:25 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:03:25.147115 | orchestrator | 2025-09-04 01:03:25 | INFO  | Task 17452f1b-e01c-42f8-918a-2d3cd4f63dca is in state STARTED 2025-09-04 01:03:25.147166 | orchestrator | 2025-09-04 01:03:25 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:03:28.192681 | orchestrator | 2025-09-04 01:03:28 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:03:28.195732 | orchestrator | 2025-09-04 01:03:28 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:03:28.198986 | orchestrator | 2025-09-04 01:03:28 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:03:28.204228 | orchestrator | 
2025-09-04 01:03:28 | INFO  | Task 17452f1b-e01c-42f8-918a-2d3cd4f63dca is in state STARTED 2025-09-04 01:03:28.204254 | orchestrator | 2025-09-04 01:03:28 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:03:31.250371 | orchestrator | 2025-09-04 01:03:31 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:03:31.254472 | orchestrator | 2025-09-04 01:03:31 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:03:31.256703 | orchestrator | 2025-09-04 01:03:31 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:03:31.257690 | orchestrator | 2025-09-04 01:03:31 | INFO  | Task 17452f1b-e01c-42f8-918a-2d3cd4f63dca is in state STARTED 2025-09-04 01:03:31.257977 | orchestrator | 2025-09-04 01:03:31 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:03:34.303614 | orchestrator | 2025-09-04 01:03:34 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:03:34.304376 | orchestrator | 2025-09-04 01:03:34 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:03:34.305095 | orchestrator | 2025-09-04 01:03:34 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:03:34.306119 | orchestrator | 2025-09-04 01:03:34 | INFO  | Task 17452f1b-e01c-42f8-918a-2d3cd4f63dca is in state STARTED 2025-09-04 01:03:34.306146 | orchestrator | 2025-09-04 01:03:34 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:03:37.372172 | orchestrator | 2025-09-04 01:03:37 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:03:37.374183 | orchestrator | 2025-09-04 01:03:37 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:03:37.377568 | orchestrator | 2025-09-04 01:03:37 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:03:37.379150 | orchestrator | 2025-09-04 01:03:37 | INFO  | Task 17452f1b-e01c-42f8-918a-2d3cd4f63dca is in state STARTED 2025-09-04 01:03:37.379175 | orchestrator | 2025-09-04 01:03:37 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:03:40.428046 | orchestrator | 2025-09-04 01:03:40 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:03:40.428805 | orchestrator | 2025-09-04 01:03:40 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:03:40.430284 | orchestrator | 2025-09-04 01:03:40 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:03:40.431657 | orchestrator | 2025-09-04 01:03:40 | INFO  | Task 17452f1b-e01c-42f8-918a-2d3cd4f63dca is in state STARTED 2025-09-04 01:03:40.431869 | orchestrator | 2025-09-04 01:03:40 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:03:43.473660 | orchestrator | 2025-09-04 01:03:43 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:03:43.474124 | orchestrator | 2025-09-04 01:03:43 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:03:43.474880 | orchestrator | 2025-09-04 01:03:43 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:03:43.475775 | orchestrator | 2025-09-04 01:03:43 | INFO  | Task 17452f1b-e01c-42f8-918a-2d3cd4f63dca is in state STARTED 2025-09-04 01:03:43.475874 | orchestrator | 2025-09-04 01:03:43 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:03:46.519177 | orchestrator | 2025-09-04 01:03:46 | INFO 
 | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:03:46.521669 | orchestrator | 2025-09-04 01:03:46 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:03:46.524592 | orchestrator | 2025-09-04 01:03:46 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:03:46.526473 | orchestrator | 2025-09-04 01:03:46 | INFO  | Task 17452f1b-e01c-42f8-918a-2d3cd4f63dca is in state STARTED 2025-09-04 01:03:46.526614 | orchestrator | 2025-09-04 01:03:46 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:03:49.561892 | orchestrator | 2025-09-04 01:03:49 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:03:49.562205 | orchestrator | 2025-09-04 01:03:49 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:03:49.563229 | orchestrator | 2025-09-04 01:03:49 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:03:49.564065 | orchestrator | 2025-09-04 01:03:49 | INFO  | Task 17452f1b-e01c-42f8-918a-2d3cd4f63dca is in state STARTED 2025-09-04 01:03:49.564090 | orchestrator | 2025-09-04 01:03:49 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:03:52.594609 | orchestrator | 2025-09-04 01:03:52 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:03:52.595322 | orchestrator | 2025-09-04 01:03:52 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:03:52.595710 | orchestrator | 2025-09-04 01:03:52 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:03:52.596500 | orchestrator | 2025-09-04 01:03:52 | INFO  | Task 17452f1b-e01c-42f8-918a-2d3cd4f63dca is in state STARTED 2025-09-04 01:03:52.596627 | orchestrator | 2025-09-04 01:03:52 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:03:55.628158 | orchestrator | 2025-09-04 01:03:55 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:03:55.628753 | orchestrator | 2025-09-04 01:03:55 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state STARTED 2025-09-04 01:03:55.629784 | orchestrator | 2025-09-04 01:03:55 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:03:55.630503 | orchestrator | 2025-09-04 01:03:55 | INFO  | Task 17452f1b-e01c-42f8-918a-2d3cd4f63dca is in state STARTED 2025-09-04 01:03:55.630542 | orchestrator | 2025-09-04 01:03:55 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:03:58.659266 | orchestrator | 2025-09-04 01:03:58 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:03:58.659691 | orchestrator | 2025-09-04 01:03:58 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:03:58.661849 | orchestrator | 2025-09-04 01:03:58 | INFO  | Task 588771ad-d6da-4ae9-8411-a27f58468917 is in state SUCCESS 2025-09-04 01:03:58.664046 | orchestrator | 2025-09-04 01:03:58.664122 | orchestrator | 2025-09-04 01:03:58.664138 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-04 01:03:58.664261 | orchestrator | 2025-09-04 01:03:58.664289 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-04 01:03:58.664307 | orchestrator | Thursday 04 September 2025 01:00:08 +0000 (0:00:00.504) 0:00:00.504 **** 2025-09-04 01:03:58.664325 | orchestrator | ok: [testbed-node-0] 2025-09-04 01:03:58.664353 | 
orchestrator | ok: [testbed-node-1] 2025-09-04 01:03:58.664372 | orchestrator | ok: [testbed-node-2] 2025-09-04 01:03:58.664390 | orchestrator | ok: [testbed-node-3] 2025-09-04 01:03:58.664406 | orchestrator | ok: [testbed-node-4] 2025-09-04 01:03:58.664422 | orchestrator | ok: [testbed-node-5] 2025-09-04 01:03:58.664439 | orchestrator | 2025-09-04 01:03:58.664456 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-04 01:03:58.664474 | orchestrator | Thursday 04 September 2025 01:00:09 +0000 (0:00:00.716) 0:00:01.221 **** 2025-09-04 01:03:58.664491 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-09-04 01:03:58.665602 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-09-04 01:03:58.665633 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-09-04 01:03:58.665653 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-09-04 01:03:58.665672 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-09-04 01:03:58.665692 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-09-04 01:03:58.665710 | orchestrator | 2025-09-04 01:03:58.665729 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-09-04 01:03:58.665749 | orchestrator | 2025-09-04 01:03:58.665768 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-04 01:03:58.665787 | orchestrator | Thursday 04 September 2025 01:00:10 +0000 (0:00:00.980) 0:00:02.201 **** 2025-09-04 01:03:58.665807 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 01:03:58.665858 | orchestrator | 2025-09-04 01:03:58.665878 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-09-04 01:03:58.665897 | orchestrator | Thursday 04 September 2025 01:00:11 +0000 (0:00:01.304) 0:00:03.506 **** 2025-09-04 01:03:58.665944 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-09-04 01:03:58.665965 | orchestrator | 2025-09-04 01:03:58.665984 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-09-04 01:03:58.666003 | orchestrator | Thursday 04 September 2025 01:00:14 +0000 (0:00:02.988) 0:00:06.495 **** 2025-09-04 01:03:58.666086 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-09-04 01:03:58.666111 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-09-04 01:03:58.666130 | orchestrator | 2025-09-04 01:03:58.666150 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-09-04 01:03:58.666171 | orchestrator | Thursday 04 September 2025 01:00:21 +0000 (0:00:06.961) 0:00:13.457 **** 2025-09-04 01:03:58.666193 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-04 01:03:58.666213 | orchestrator | 2025-09-04 01:03:58.666236 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-09-04 01:03:58.666256 | orchestrator | Thursday 04 September 2025 01:00:25 +0000 (0:00:03.430) 0:00:16.888 **** 2025-09-04 01:03:58.666276 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-04 
01:03:58.666298 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-09-04 01:03:58.666318 | orchestrator | 2025-09-04 01:03:58.666338 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-09-04 01:03:58.666358 | orchestrator | Thursday 04 September 2025 01:00:29 +0000 (0:00:04.375) 0:00:21.263 **** 2025-09-04 01:03:58.666378 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-04 01:03:58.666396 | orchestrator | 2025-09-04 01:03:58.666416 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-09-04 01:03:58.666438 | orchestrator | Thursday 04 September 2025 01:00:33 +0000 (0:00:03.352) 0:00:24.616 **** 2025-09-04 01:03:58.666458 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-09-04 01:03:58.666479 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-09-04 01:03:58.666500 | orchestrator | 2025-09-04 01:03:58.666518 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-09-04 01:03:58.666537 | orchestrator | Thursday 04 September 2025 01:00:41 +0000 (0:00:08.347) 0:00:32.964 **** 2025-09-04 01:03:58.666579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-04 01:03:58.666689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-04 01:03:58.666731 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.666752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-04 01:03:58.666774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.666796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.666881 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.666947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.666969 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.666992 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.667013 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.667041 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.667062 | orchestrator | 2025-09-04 01:03:58.667131 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-04 01:03:58.667154 | orchestrator | Thursday 04 September 2025 01:00:44 +0000 (0:00:03.231) 0:00:36.196 **** 2025-09-04 01:03:58.667184 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:03:58.667204 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:03:58.667224 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:03:58.667243 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:03:58.667262 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:03:58.667280 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:03:58.667299 | orchestrator | 2025-09-04 01:03:58.667319 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-04 01:03:58.667339 | orchestrator | Thursday 04 September 2025 01:00:45 +0000 (0:00:00.732) 0:00:36.928 **** 2025-09-04 01:03:58.667357 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:03:58.667376 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:03:58.667395 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:03:58.667415 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 01:03:58.667435 | orchestrator | 2025-09-04 01:03:58.667454 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-09-04 01:03:58.667473 | orchestrator | Thursday 04 September 2025 01:00:46 +0000 (0:00:00.942) 0:00:37.870 **** 2025-09-04 01:03:58.667491 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-09-04 01:03:58.667511 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-09-04 01:03:58.667531 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-09-04 01:03:58.667550 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-09-04 01:03:58.667569 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-09-04 01:03:58.667587 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-09-04 01:03:58.667605 | orchestrator | 2025-09-04 01:03:58.667624 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-09-04 01:03:58.667644 | orchestrator | Thursday 04 September 2025 01:00:47 +0000 (0:00:01.622) 0:00:39.492 **** 2025-09-04 01:03:58.667665 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': 
True}])  2025-09-04 01:03:58.667687 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-04 01:03:58.667716 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-04 01:03:58.667798 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-04 01:03:58.667823 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-04 01:03:58.667845 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-04 01:03:58.667866 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-04 01:03:58.667893 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-04 01:03:58.668018 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-04 01:03:58.668046 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-04 01:03:58.668068 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-04 01:03:58.668089 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-04 01:03:58.668108 | orchestrator | 2025-09-04 01:03:58.668128 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-09-04 01:03:58.668159 | orchestrator | Thursday 04 September 2025 01:00:52 +0000 (0:00:04.427) 0:00:43.920 **** 2025-09-04 01:03:58.668177 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-04 01:03:58.668197 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-04 01:03:58.668217 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-04 01:03:58.668236 | orchestrator | 2025-09-04 01:03:58.668255 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-09-04 01:03:58.668274 | orchestrator | Thursday 04 September 2025 01:00:54 +0000 (0:00:02.062) 0:00:45.982 **** 2025-09-04 01:03:58.668294 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-09-04 01:03:58.668314 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-09-04 01:03:58.668340 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-09-04 01:03:58.668359 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-09-04 01:03:58.668378 | orchestrator | 
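The keyring tasks in this part of the play stage the pre-created Ceph client keyrings into the per-service Kolla config directories on the volume and backup hosts. As a rough sketch of what such a copy looks like (the staging path, file layout and variable names are assumptions for illustration, not the actual role code):

- name: Copy Ceph keyring files for cinder-volume (sketch)
  become: true
  ansible.builtin.copy:
    # staging path below is an illustrative assumption
    src: "/opt/ceph-keys/{{ item.cluster }}.client.cinder.keyring"
    dest: "/etc/kolla/cinder-volume/{{ item.cluster }}.client.cinder.keyring"
    owner: root
    group: root
    mode: "0600"
  loop:
    - { name: "rbd-1", cluster: "ceph", enabled: true }

The same loop item seen in the log, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}, drives the real task on testbed-node-3/4/5.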
changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-09-04 01:03:58.668449 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-09-04 01:03:58.668470 | orchestrator | 2025-09-04 01:03:58.668490 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-09-04 01:03:58.668509 | orchestrator | Thursday 04 September 2025 01:00:57 +0000 (0:00:03.098) 0:00:49.081 **** 2025-09-04 01:03:58.668528 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-09-04 01:03:58.668546 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-09-04 01:03:58.668565 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-09-04 01:03:58.668584 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-09-04 01:03:58.668603 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-09-04 01:03:58.668622 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-09-04 01:03:58.668641 | orchestrator | 2025-09-04 01:03:58.668660 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-09-04 01:03:58.668679 | orchestrator | Thursday 04 September 2025 01:00:58 +0000 (0:00:01.045) 0:00:50.126 **** 2025-09-04 01:03:58.668699 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:03:58.668718 | orchestrator | 2025-09-04 01:03:58.668736 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-09-04 01:03:58.668755 | orchestrator | Thursday 04 September 2025 01:00:58 +0000 (0:00:00.150) 0:00:50.277 **** 2025-09-04 01:03:58.668773 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:03:58.668793 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:03:58.668812 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:03:58.668831 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:03:58.668850 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:03:58.668869 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:03:58.668888 | orchestrator | 2025-09-04 01:03:58.668906 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-04 01:03:58.668955 | orchestrator | Thursday 04 September 2025 01:00:59 +0000 (0:00:00.609) 0:00:50.886 **** 2025-09-04 01:03:58.668977 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 01:03:58.668998 | orchestrator | 2025-09-04 01:03:58.669016 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-09-04 01:03:58.669035 | orchestrator | Thursday 04 September 2025 01:01:00 +0000 (0:00:01.204) 0:00:52.091 **** 2025-09-04 01:03:58.669056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-04 01:03:58.669099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-04 01:03:58.669181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-04 01:03:58.669205 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.669226 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.669258 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.669280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.669307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.669378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.669400 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.669421 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.669454 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.669475 | orchestrator | 2025-09-04 01:03:58.669495 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-09-04 01:03:58.669515 | orchestrator | Thursday 04 September 2025 01:01:03 +0000 (0:00:03.151) 0:00:55.242 **** 2025-09-04 01:03:58.669534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-04 01:03:58.669570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-04 01:03:58.669592 | 
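Every host skips the backend internal TLS certificate and key copies here because each cinder service carries 'tls_backend': 'no' in its haproxy entry, so the guarding condition never holds. A conditional of roughly this shape reproduces that behaviour (variable names such as cinder_services and cinder_enable_tls_backend are assumptions, not the actual service-cert-copy code):

- name: Copy backend internal TLS certificate (sketch)
  become: true
  ansible.builtin.copy:
    src: "/etc/kolla/certificates/{{ item.key }}-cert.pem"   # assumed certificate layout
    dest: "/etc/kolla/{{ item.key }}/{{ item.key }}-cert.pem"
    mode: "0600"
  loop: "{{ cinder_services | default({}) | dict2items }}"   # assumed services dict
  when: cinder_enable_tls_backend | default(false) | bool    # false in this deployment, hence 'skipping' per item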
orchestrator | skipping: [testbed-node-0] 2025-09-04 01:03:58.669614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-04 01:03:58.669633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-04 01:03:58.669665 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:03:58.669686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-04 01:03:58.669706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-04 01:03:58.669726 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:03:58.669752 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 
'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-04 01:03:58.669785 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-04 01:03:58.669806 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:03:58.669824 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-04 01:03:58.669855 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-04 01:03:58.669876 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:03:58.669896 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-04 01:03:58.669940 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-04 01:03:58.669962 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:03:58.669980 | orchestrator | 2025-09-04 01:03:58.670007 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-09-04 01:03:58.670072 | orchestrator | Thursday 04 September 2025 01:01:05 +0000 (0:00:01.752) 0:00:56.994 **** 2025-09-04 01:03:58.670105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-04 01:03:58.670144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-04 01:03:58.670165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-04 01:03:58.670183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-04 01:03:58.670201 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:03:58.670218 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:03:58.670236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-04 01:03:58.670271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-04 01:03:58.670289 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:03:58.670307 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-04 01:03:58.670335 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-04 01:03:58.670353 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:03:58.670371 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-04 01:03:58.670389 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-04 01:03:58.670406 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:03:58.670439 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-04 
01:03:58.670458 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-04 01:03:58.670485 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:03:58.670502 | orchestrator | 2025-09-04 01:03:58.670519 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-09-04 01:03:58.670536 | orchestrator | Thursday 04 September 2025 01:01:07 +0000 (0:00:02.029) 0:00:59.024 **** 2025-09-04 01:03:58.670554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-04 01:03:58.670573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-04 01:03:58.670590 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.670630 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.670657 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.670675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-04 01:03:58.670691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 
'timeout': '30'}}}) 2025-09-04 01:03:58.670707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.670739 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.670767 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.670786 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.670803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.670821 | orchestrator | 2025-09-04 01:03:58.670838 | 
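Each cinder service entry printed above carries a healthcheck block (interval, retries, start_period, test, timeout; all seconds encoded as strings). As an illustrative sketch only, not kolla-ansible's own kolla_container code, such a dict could be mapped onto a container health check with the Docker Python SDK (assumed to be available as the docker package), using the cinder-api values taken from the log:

    # Illustrative sketch only: translate a kolla-style healthcheck dict
    # (seconds as strings, as logged above) into a docker-py Healthcheck.
    # Assumes the `docker` SDK is installed; this is not kolla-ansible code.
    import docker.types

    NS_PER_SECOND = 1_000_000_000

    cinder_api_healthcheck = {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8776"],
        "timeout": "30",
    }

    def to_docker_healthcheck(hc: dict) -> docker.types.Healthcheck:
        """Convert seconds-as-strings into the nanosecond values docker-py expects."""
        return docker.types.Healthcheck(
            test=hc["test"],
            interval=int(hc["interval"]) * NS_PER_SECOND,
            timeout=int(hc["timeout"]) * NS_PER_SECOND,
            start_period=int(hc["start_period"]) * NS_PER_SECOND,
            retries=int(hc["retries"]),
        )

    print(to_docker_healthcheck(cinder_api_healthcheck))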
orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-09-04 01:03:58.670856 | orchestrator | Thursday 04 September 2025 01:01:10 +0000 (0:00:03.380) 0:01:02.405 **** 2025-09-04 01:03:58.670874 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-04 01:03:58.670890 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:03:58.670907 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-04 01:03:58.670981 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:03:58.670999 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-04 01:03:58.671016 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:03:58.671033 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-04 01:03:58.671051 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-04 01:03:58.671068 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-04 01:03:58.671084 | orchestrator | 2025-09-04 01:03:58.671101 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-09-04 01:03:58.671117 | orchestrator | Thursday 04 September 2025 01:01:13 +0000 (0:00:02.221) 0:01:04.626 **** 2025-09-04 01:03:58.671142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-04 01:03:58.671180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-04 01:03:58.671199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-04 01:03:58.671217 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.671237 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.671266 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.671295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.671313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.671331 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.671349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.671367 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.671398 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.671417 | orchestrator | 2025-09-04 01:03:58.671435 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-09-04 01:03:58.671452 | orchestrator | Thursday 04 September 2025 01:01:23 +0000 (0:00:10.830) 0:01:15.457 **** 2025-09-04 01:03:58.671475 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:03:58.671494 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:03:58.671511 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:03:58.671528 | orchestrator | changed: [testbed-node-3] 2025-09-04 01:03:58.671545 | orchestrator | changed: [testbed-node-4] 2025-09-04 01:03:58.671561 | orchestrator | changed: [testbed-node-5] 2025-09-04 01:03:58.671577 | orchestrator | 2025-09-04 01:03:58.671593 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-09-04 01:03:58.671608 | orchestrator | Thursday 04 September 2025 01:01:27 +0000 (0:00:03.138) 0:01:18.595 **** 2025-09-04 01:03:58.671624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-04 01:03:58.671642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-04 01:03:58.671658 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:03:58.671674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': 
'30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-04 01:03:58.671699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-04 01:03:58.671714 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:03:58.671745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-04 01:03:58.671762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-04 01:03:58.671779 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-04 01:03:58.671795 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 
'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-04 01:03:58.671811 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:03:58.671827 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:03:58.671859 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-04 01:03:58.671882 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-04 01:03:58.671899 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:03:58.671990 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-04 01:03:58.672013 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-04 01:03:58.672053 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:03:58.672094 | orchestrator | 2025-09-04 01:03:58.672115 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-09-04 01:03:58.672132 | orchestrator | Thursday 04 September 2025 01:01:28 +0000 (0:00:01.779) 0:01:20.374 **** 2025-09-04 01:03:58.672145 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:03:58.672158 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:03:58.672170 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:03:58.672183 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:03:58.672196 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:03:58.672209 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:03:58.672222 | orchestrator | 2025-09-04 01:03:58.672235 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-09-04 01:03:58.672259 | orchestrator | Thursday 04 September 2025 01:01:29 +0000 (0:00:00.744) 0:01:21.118 **** 2025-09-04 01:03:58.672273 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.672288 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.672323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-04 01:03:58.672340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-04 01:03:58.672355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-04 01:03:58.672377 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.672391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.672417 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.672431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.672445 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.672468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.672482 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-04 01:03:58.672497 | orchestrator | 2025-09-04 01:03:58.672509 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-04 01:03:58.672520 | orchestrator | Thursday 04 September 2025 01:01:32 +0000 (0:00:02.873) 0:01:23.992 **** 2025-09-04 01:03:58.672534 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:03:58.672545 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:03:58.672558 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:03:58.672572 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:03:58.672585 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:03:58.672598 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:03:58.672612 | orchestrator | 2025-09-04 01:03:58.672625 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-09-04 01:03:58.672639 | orchestrator | Thursday 04 September 2025 01:01:32 +0000 (0:00:00.492) 0:01:24.484 **** 2025-09-04 01:03:58.672652 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:03:58.672665 | orchestrator | 2025-09-04 01:03:58.672678 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-09-04 01:03:58.672692 | orchestrator | Thursday 04 September 2025 01:01:35 +0000 (0:00:02.574) 0:01:27.059 **** 2025-09-04 01:03:58.672705 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:03:58.672718 | orchestrator | 2025-09-04 01:03:58.672732 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-09-04 01:03:58.672752 | orchestrator | Thursday 04 September 2025 01:01:37 +0000 (0:00:02.220) 0:01:29.280 **** 2025-09-04 01:03:58.672766 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:03:58.672779 | orchestrator | 2025-09-04 01:03:58.672793 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-04 01:03:58.672807 | orchestrator | Thursday 04 September 2025 01:01:55 +0000 (0:00:18.063) 0:01:47.343 **** 2025-09-04 01:03:58.672819 | orchestrator | 2025-09-04 01:03:58.672839 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-04 01:03:58.672853 | orchestrator | Thursday 04 September 2025 01:01:55 +0000 (0:00:00.066) 0:01:47.409 **** 2025-09-04 01:03:58.672867 | orchestrator | 2025-09-04 01:03:58.672880 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-04 01:03:58.672894 | orchestrator | Thursday 04 September 2025 01:01:55 +0000 (0:00:00.061) 0:01:47.471 **** 2025-09-04 01:03:58.672908 | orchestrator | 2025-09-04 01:03:58.672940 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-04 01:03:58.672954 | orchestrator | Thursday 04 September 2025 01:01:56 +0000 (0:00:00.067) 0:01:47.538 **** 2025-09-04 01:03:58.672966 | orchestrator | 2025-09-04 01:03:58.672978 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 
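The item values echoed by the cinder tasks above are kolla-ansible service definitions keyed by service name. The following is a trimmed reconstruction of that data shape from the log output (not the role's actual defaults), together with a small helper that lists which inventory group runs each enabled service:

    # Sketch of the service-definition mapping logged by the cinder role above.
    # Entries are reconstructed from the log; only two services are shown.
    cinder_services = {
        "cinder-api": {
            "container_name": "cinder_api",
            "group": "cinder-api",
            "enabled": True,
            "image": "registry.osism.tech/kolla/cinder-api:2024.2",
            "volumes": [
                "/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro",
                "/etc/localtime:/etc/localtime:ro",
                "kolla_logs:/var/log/kolla/",
            ],
            "healthcheck": {
                "interval": "30",
                "retries": "3",
                "start_period": "5",
                "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8776"],
                "timeout": "30",
            },
        },
        "cinder-volume": {
            "container_name": "cinder_volume",
            "group": "cinder-volume",
            "enabled": True,
            "image": "registry.osism.tech/kolla/cinder-volume:2024.2",
            "privileged": True,
            "healthcheck": {
                "interval": "30",
                "retries": "3",
                "start_period": "5",
                "test": ["CMD-SHELL", "healthcheck_port cinder-volume 5672"],
                "timeout": "30",
            },
        },
    }

    def enabled_services_by_group(services: dict) -> dict:
        """Group enabled service names by the inventory group that runs them."""
        result: dict[str, list[str]] = {}
        for name, svc in services.items():
            if svc.get("enabled"):
                result.setdefault(svc["group"], []).append(name)
        return result

    print(enabled_services_by_group(cinder_services))
    # -> {'cinder-api': ['cinder-api'], 'cinder-volume': ['cinder-volume']}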
2025-09-04 01:03:58.673002 | orchestrator | Thursday 04 September 2025 01:01:56 +0000 (0:00:00.065) 0:01:47.604 **** 2025-09-04 01:03:58.673016 | orchestrator | 2025-09-04 01:03:58.673029 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-04 01:03:58.673043 | orchestrator | Thursday 04 September 2025 01:01:56 +0000 (0:00:00.064) 0:01:47.669 **** 2025-09-04 01:03:58.673056 | orchestrator | 2025-09-04 01:03:58.673070 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-09-04 01:03:58.673082 | orchestrator | Thursday 04 September 2025 01:01:56 +0000 (0:00:00.069) 0:01:47.738 **** 2025-09-04 01:03:58.673094 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:03:58.673106 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:03:58.673117 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:03:58.673131 | orchestrator | 2025-09-04 01:03:58.673144 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-09-04 01:03:58.673156 | orchestrator | Thursday 04 September 2025 01:02:17 +0000 (0:00:21.610) 0:02:09.348 **** 2025-09-04 01:03:58.673170 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:03:58.673184 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:03:58.673196 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:03:58.673213 | orchestrator | 2025-09-04 01:03:58.673226 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-09-04 01:03:58.673239 | orchestrator | Thursday 04 September 2025 01:02:30 +0000 (0:00:13.182) 0:02:22.531 **** 2025-09-04 01:03:58.673252 | orchestrator | changed: [testbed-node-4] 2025-09-04 01:03:58.673266 | orchestrator | changed: [testbed-node-5] 2025-09-04 01:03:58.673278 | orchestrator | changed: [testbed-node-3] 2025-09-04 01:03:58.673291 | orchestrator | 2025-09-04 01:03:58.673304 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-09-04 01:03:58.673318 | orchestrator | Thursday 04 September 2025 01:03:47 +0000 (0:01:16.333) 0:03:38.865 **** 2025-09-04 01:03:58.673329 | orchestrator | changed: [testbed-node-5] 2025-09-04 01:03:58.673341 | orchestrator | changed: [testbed-node-3] 2025-09-04 01:03:58.673354 | orchestrator | changed: [testbed-node-4] 2025-09-04 01:03:58.673367 | orchestrator | 2025-09-04 01:03:58.673380 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-09-04 01:03:58.673393 | orchestrator | Thursday 04 September 2025 01:03:54 +0000 (0:00:07.235) 0:03:46.100 **** 2025-09-04 01:03:58.673405 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:03:58.673418 | orchestrator | 2025-09-04 01:03:58.673430 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 01:03:58.673444 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-04 01:03:58.673459 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-04 01:03:58.673471 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-04 01:03:58.673484 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-04 01:03:58.673497 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 
skipped=8  rescued=0 ignored=0 2025-09-04 01:03:58.673511 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-04 01:03:58.673523 | orchestrator | 2025-09-04 01:03:58.673537 | orchestrator | 2025-09-04 01:03:58.673550 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 01:03:58.673563 | orchestrator | Thursday 04 September 2025 01:03:55 +0000 (0:00:01.038) 0:03:47.139 **** 2025-09-04 01:03:58.673576 | orchestrator | =============================================================================== 2025-09-04 01:03:58.673600 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 76.33s 2025-09-04 01:03:58.673615 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 21.61s 2025-09-04 01:03:58.673628 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 18.06s 2025-09-04 01:03:58.673641 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 13.18s 2025-09-04 01:03:58.673654 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.83s 2025-09-04 01:03:58.673667 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.35s 2025-09-04 01:03:58.673680 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 7.24s 2025-09-04 01:03:58.673694 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.96s 2025-09-04 01:03:58.673719 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.43s 2025-09-04 01:03:58.673733 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.38s 2025-09-04 01:03:58.673745 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.43s 2025-09-04 01:03:58.673756 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.38s 2025-09-04 01:03:58.673770 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.35s 2025-09-04 01:03:58.673783 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.23s 2025-09-04 01:03:58.673796 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.15s 2025-09-04 01:03:58.673809 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 3.14s 2025-09-04 01:03:58.673822 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.10s 2025-09-04 01:03:58.673836 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 2.99s 2025-09-04 01:03:58.673849 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.87s 2025-09-04 01:03:58.673863 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.57s 2025-09-04 01:03:58.673876 | orchestrator | 2025-09-04 01:03:58 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:03:58.673889 | orchestrator | 2025-09-04 01:03:58 | INFO  | Task 17452f1b-e01c-42f8-918a-2d3cd4f63dca is in state STARTED 2025-09-04 01:03:58.673902 | orchestrator | 2025-09-04 01:03:58 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:04:01.703737 | orchestrator | 2025-09-04 01:04:01 | INFO  
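From this point the job only polls task status: the same four task UUIDs are reported as STARTED roughly every three seconds until one of them reaches SUCCESS further below. A minimal sketch of such a poll-until-terminal-state loop, with a hypothetical get_task_state callable standing in for the real task API client (this is not the osism CLI's implementation), could look like:

    # Minimal sketch of a poll-until-done loop like the one producing the
    # "is in state STARTED ... Wait 1 second(s) until the next check" entries.
    # `get_task_state` is a hypothetical callable standing in for the real client.
    import time
    from typing import Callable, Iterable

    TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}  # assumed terminal set

    def wait_for_tasks(task_ids: Iterable[str],
                       get_task_state: Callable[[str], str],
                       interval: float = 1.0) -> dict[str, str]:
        """Log each task's state and sleep until every task reaches a terminal state."""
        pending = set(task_ids)
        states: dict[str, str] = {}
        while pending:
            for task_id in sorted(pending):
                states[task_id] = get_task_state(task_id)
                print(f"Task {task_id} is in state {states[task_id]}")
            pending = {t for t in pending if states[t] not in TERMINAL_STATES}
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)
        return states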
| Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:04:01.704541 | orchestrator | 2025-09-04 01:04:01 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:04:01.706010 | orchestrator | 2025-09-04 01:04:01 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:04:01.708053 | orchestrator | 2025-09-04 01:04:01 | INFO  | Task 17452f1b-e01c-42f8-918a-2d3cd4f63dca is in state STARTED 2025-09-04 01:04:01.708742 | orchestrator | 2025-09-04 01:04:01 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:04:04.738750 | orchestrator | 2025-09-04 01:04:04 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:04:04.739990 | orchestrator | 2025-09-04 01:04:04 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:04:04.740699 | orchestrator | 2025-09-04 01:04:04 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:04:04.741316 | orchestrator | 2025-09-04 01:04:04 | INFO  | Task 17452f1b-e01c-42f8-918a-2d3cd4f63dca is in state STARTED 2025-09-04 01:04:04.741339 | orchestrator | 2025-09-04 01:04:04 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:04:07.775028 | orchestrator | 2025-09-04 01:04:07 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:04:07.775592 | orchestrator | 2025-09-04 01:04:07 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:04:07.776363 | orchestrator | 2025-09-04 01:04:07 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:04:07.778272 | orchestrator | 2025-09-04 01:04:07 | INFO  | Task 17452f1b-e01c-42f8-918a-2d3cd4f63dca is in state STARTED 2025-09-04 01:04:07.778308 | orchestrator | 2025-09-04 01:04:07 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:04:10.807668 | orchestrator | 2025-09-04 01:04:10 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:04:10.810772 | orchestrator | 2025-09-04 01:04:10 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:04:10.811880 | orchestrator | 2025-09-04 01:04:10 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:04:10.813063 | orchestrator | 2025-09-04 01:04:10 | INFO  | Task 17452f1b-e01c-42f8-918a-2d3cd4f63dca is in state STARTED 2025-09-04 01:04:10.813093 | orchestrator | 2025-09-04 01:04:10 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:04:13.837490 | orchestrator | 2025-09-04 01:04:13 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:04:13.837701 | orchestrator | 2025-09-04 01:04:13 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:04:13.839277 | orchestrator | 2025-09-04 01:04:13 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:04:13.839889 | orchestrator | 2025-09-04 01:04:13 | INFO  | Task 17452f1b-e01c-42f8-918a-2d3cd4f63dca is in state STARTED 2025-09-04 01:04:13.840079 | orchestrator | 2025-09-04 01:04:13 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:04:16.867156 | orchestrator | 2025-09-04 01:04:16 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:04:16.867375 | orchestrator | 2025-09-04 01:04:16 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:04:16.868161 | orchestrator | 2025-09-04 01:04:16 | INFO  | 
Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:04:16.869001 | orchestrator | 2025-09-04 01:04:16 | INFO  | Task 17452f1b-e01c-42f8-918a-2d3cd4f63dca is in state STARTED 2025-09-04 01:04:16.869024 | orchestrator | 2025-09-04 01:04:16 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:04:19.892048 | orchestrator | 2025-09-04 01:04:19 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:04:19.892475 | orchestrator | 2025-09-04 01:04:19 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:04:19.893175 | orchestrator | 2025-09-04 01:04:19 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:04:19.893710 | orchestrator | 2025-09-04 01:04:19 | INFO  | Task 17452f1b-e01c-42f8-918a-2d3cd4f63dca is in state STARTED 2025-09-04 01:04:19.893832 | orchestrator | 2025-09-04 01:04:19 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:04:22.927884 | orchestrator | 2025-09-04 01:04:22 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:04:22.929054 | orchestrator | 2025-09-04 01:04:22 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:04:22.929730 | orchestrator | 2025-09-04 01:04:22 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:04:22.931880 | orchestrator | 2025-09-04 01:04:22 | INFO  | Task 17452f1b-e01c-42f8-918a-2d3cd4f63dca is in state STARTED 2025-09-04 01:04:22.931938 | orchestrator | 2025-09-04 01:04:22 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:04:25.962431 | orchestrator | 2025-09-04 01:04:25 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:04:25.963120 | orchestrator | 2025-09-04 01:04:25 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:04:25.964819 | orchestrator | 2025-09-04 01:04:25 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:04:25.965710 | orchestrator | 2025-09-04 01:04:25 | INFO  | Task 17452f1b-e01c-42f8-918a-2d3cd4f63dca is in state STARTED 2025-09-04 01:04:25.965735 | orchestrator | 2025-09-04 01:04:25 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:04:28.995663 | orchestrator | 2025-09-04 01:04:28 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:04:28.996671 | orchestrator | 2025-09-04 01:04:28 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:04:28.998187 | orchestrator | 2025-09-04 01:04:28 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:04:28.998949 | orchestrator | 2025-09-04 01:04:28 | INFO  | Task 17452f1b-e01c-42f8-918a-2d3cd4f63dca is in state STARTED 2025-09-04 01:04:28.998980 | orchestrator | 2025-09-04 01:04:28 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:04:32.024729 | orchestrator | 2025-09-04 01:04:32 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:04:32.026179 | orchestrator | 2025-09-04 01:04:32 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:04:32.026814 | orchestrator | 2025-09-04 01:04:32 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:04:32.027610 | orchestrator | 2025-09-04 01:04:32 | INFO  | Task 17452f1b-e01c-42f8-918a-2d3cd4f63dca is in state STARTED 2025-09-04 01:04:32.027634 | orchestrator | 2025-09-04 01:04:32 | INFO  | 
Wait 1 second(s) until the next check
2025-09-04 01:04:35.050379 | orchestrator | 2025-09-04 01:04:35 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED
2025-09-04 01:04:35.051411 | orchestrator | 2025-09-04 01:04:35 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED
2025-09-04 01:04:35.051522 | orchestrator | 2025-09-04 01:04:35 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED
2025-09-04 01:04:35.052730 | orchestrator | 2025-09-04 01:04:35 | INFO  | Task 17452f1b-e01c-42f8-918a-2d3cd4f63dca is in state STARTED
2025-09-04 01:04:35.053017 | orchestrator | 2025-09-04 01:04:35 | INFO  | Wait 1 second(s) until the next check
[... the same four task status checks repeat roughly every 3 seconds from 01:04:38 through 01:05:26, with all four tasks remaining in state STARTED ...]
2025-09-04 01:05:29.648125 | orchestrator | 2025-09-04 01:05:29 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED
2025-09-04 01:05:29.650153 | orchestrator | 2025-09-04 01:05:29 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED
2025-09-04 01:05:29.651361 | orchestrator | 2025-09-04 01:05:29 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED
2025-09-04 01:05:29.654314 | orchestrator | 2025-09-04 01:05:29 | INFO  | Task 17452f1b-e01c-42f8-918a-2d3cd4f63dca is in state SUCCESS
2025-09-04 01:05:29.654517 | orchestrator | 2025-09-04 01:05:29 | INFO  | Wait 1 second(s) until the next check
2025-09-04 01:05:29.655944 | orchestrator |
2025-09-04 01:05:29.656122 | orchestrator |
2025-09-04 01:05:29.656141 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-04 01:05:29.656152 | orchestrator |
2025-09-04 01:05:29.656163 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-04 01:05:29.656199 | orchestrator | Thursday 04 September 2025 01:03:25 +0000 (0:00:00.273) 0:00:00.273 ****
2025-09-04 01:05:29.656211 | orchestrator | ok: [testbed-node-0]
2025-09-04 01:05:29.656488 | orchestrator | ok: [testbed-node-1]
2025-09-04 01:05:29.656504 | orchestrator | ok: [testbed-node-2]
2025-09-04 01:05:29.656515 | orchestrator |
2025-09-04 01:05:29.656526 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-04 01:05:29.656537 | orchestrator | Thursday 04 September 2025 01:03:26 +0000 (0:00:00.302) 0:00:00.576 ****
2025-09-04 01:05:29.656548 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2025-09-04 01:05:29.656560 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2025-09-04 01:05:29.656570 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2025-09-04 01:05:29.656581 | orchestrator |
2025-09-04 01:05:29.656592 | orchestrator | PLAY [Apply role barbican] *****************************************************
2025-09-04 01:05:29.656603 | orchestrator |
2025-09-04 01:05:29.656614 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-09-04 01:05:29.656624 | orchestrator | Thursday 04
September 2025 01:03:26 +0000 (0:00:00.437) 0:00:01.013 **** 2025-09-04 01:05:29.656636 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 01:05:29.656648 | orchestrator | 2025-09-04 01:05:29.656658 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-09-04 01:05:29.656674 | orchestrator | Thursday 04 September 2025 01:03:27 +0000 (0:00:00.529) 0:00:01.543 **** 2025-09-04 01:05:29.656693 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-09-04 01:05:29.656707 | orchestrator | 2025-09-04 01:05:29.656718 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-09-04 01:05:29.656729 | orchestrator | Thursday 04 September 2025 01:03:30 +0000 (0:00:03.390) 0:00:04.934 **** 2025-09-04 01:05:29.656739 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-09-04 01:05:29.656750 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-09-04 01:05:29.656761 | orchestrator | 2025-09-04 01:05:29.656788 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-09-04 01:05:29.656799 | orchestrator | Thursday 04 September 2025 01:03:36 +0000 (0:00:06.359) 0:00:11.293 **** 2025-09-04 01:05:29.656810 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-04 01:05:29.656821 | orchestrator | 2025-09-04 01:05:29.656832 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-09-04 01:05:29.656842 | orchestrator | Thursday 04 September 2025 01:03:40 +0000 (0:00:03.342) 0:00:14.635 **** 2025-09-04 01:05:29.656885 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-04 01:05:29.656918 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-09-04 01:05:29.656930 | orchestrator | 2025-09-04 01:05:29.656940 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-09-04 01:05:29.656951 | orchestrator | Thursday 04 September 2025 01:03:44 +0000 (0:00:03.934) 0:00:18.570 **** 2025-09-04 01:05:29.656962 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-04 01:05:29.656973 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-09-04 01:05:29.656983 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-09-04 01:05:29.656994 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-09-04 01:05:29.657005 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-09-04 01:05:29.657016 | orchestrator | 2025-09-04 01:05:29.657027 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-09-04 01:05:29.657037 | orchestrator | Thursday 04 September 2025 01:04:00 +0000 (0:00:16.708) 0:00:35.279 **** 2025-09-04 01:05:29.657048 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-09-04 01:05:29.657059 | orchestrator | 2025-09-04 01:05:29.657081 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-09-04 01:05:29.657094 | orchestrator | Thursday 04 September 2025 01:04:05 +0000 (0:00:05.085) 0:00:40.364 **** 2025-09-04 01:05:29.657111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-04 01:05:29.657142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-04 01:05:29.657163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-04 01:05:29.657178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-04 01:05:29.657192 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-04 01:05:29.657212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-04 01:05:29.657235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:05:29.657249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:05:29.657262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:05:29.657276 | orchestrator | 2025-09-04 01:05:29.657289 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-09-04 01:05:29.657302 | orchestrator | Thursday 04 September 2025 01:04:08 +0000 (0:00:03.024) 0:00:43.388 **** 2025-09-04 01:05:29.657315 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-09-04 01:05:29.657329 | orchestrator 
| changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-09-04 01:05:29.657341 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-09-04 01:05:29.657354 | orchestrator | 2025-09-04 01:05:29.657373 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-09-04 01:05:29.657387 | orchestrator | Thursday 04 September 2025 01:04:10 +0000 (0:00:01.485) 0:00:44.874 **** 2025-09-04 01:05:29.657399 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:05:29.657412 | orchestrator | 2025-09-04 01:05:29.657425 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-09-04 01:05:29.657438 | orchestrator | Thursday 04 September 2025 01:04:10 +0000 (0:00:00.088) 0:00:44.963 **** 2025-09-04 01:05:29.657450 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:05:29.657461 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:05:29.657479 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:05:29.657490 | orchestrator | 2025-09-04 01:05:29.657500 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-04 01:05:29.657511 | orchestrator | Thursday 04 September 2025 01:04:10 +0000 (0:00:00.316) 0:00:45.279 **** 2025-09-04 01:05:29.657522 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 01:05:29.657533 | orchestrator | 2025-09-04 01:05:29.657544 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-09-04 01:05:29.657554 | orchestrator | Thursday 04 September 2025 01:04:11 +0000 (0:00:00.531) 0:00:45.811 **** 2025-09-04 01:05:29.657566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-04 01:05:29.657584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-04 01:05:29.657597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-04 01:05:29.657613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-04 01:05:29.657631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-04 01:05:29.657642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-04 01:05:29.657653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:05:29.657672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:05:29.657684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:05:29.657695 | orchestrator | 2025-09-04 01:05:29.657706 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-09-04 01:05:29.657718 | orchestrator | Thursday 04 September 2025 01:04:15 +0000 (0:00:04.022) 0:00:49.834 **** 2025-09-04 01:05:29.657734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-04 01:05:29.657751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-04 01:05:29.657763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-04 01:05:29.657775 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:05:29.657793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-04 01:05:29.657805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-04 01:05:29.657816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-04 01:05:29.657833 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:05:29.657850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-04 01:05:29.657880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-04 01:05:29.657892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-04 01:05:29.657903 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:05:29.657914 | orchestrator | 2025-09-04 01:05:29.657926 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-09-04 01:05:29.657937 | orchestrator | Thursday 04 September 2025 01:04:16 +0000 (0:00:01.337) 0:00:51.172 **** 2025-09-04 01:05:29.657956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-04 01:05:29.657968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  
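The repeated "skipping" results for the two backend TLS copy tasks above are consistent with the per-service haproxy settings printed in each item, where 'tls_backend' is 'no' for this run. As a minimal sketch (not the task that produced this log), a conditional copy of this kind is typically expressed in an Ansible role as below; the variable names barbican_services, kolla_certificates_dir and kolla_enable_tls_backend are assumptions for illustration, not values taken from this job's configuration.

# Illustrative sketch only, assuming a barbican_services dict shaped like the
# items printed in the log and a kolla_enable_tls_backend boolean.
- name: "barbican | Copying over backend internal TLS certificate (sketch)"
  ansible.builtin.copy:
    src: "{{ kolla_certificates_dir }}/{{ item.key }}-cert.pem"   # hypothetical source path
    dest: "/etc/kolla/{{ item.key }}/{{ item.key }}-cert.pem"
    mode: "0600"
  with_dict: "{{ barbican_services }}"
  when:
    - item.value.enabled | bool
    - kolla_enable_tls_backend | bool   # false when the backend is plain HTTP, so each item is skipped

Because the when: condition evaluates to false for every service item, Ansible prints a per-item "skipping" result instead of performing the copy, which matches the pattern visible for testbed-node-0, testbed-node-1 and testbed-node-2 in this section.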
2025-09-04 01:05:29.657995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-04 01:05:29.658007 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:05:29.658073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-04 01:05:29.658086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-04 01:05:29.658097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-04 01:05:29.658108 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:05:29.658128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-04 01:05:29.658146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-04 01:05:29.658163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-04 01:05:29.658174 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:05:29.658185 | orchestrator | 2025-09-04 01:05:29.658196 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-09-04 01:05:29.658207 | orchestrator | Thursday 04 September 2025 01:04:17 +0000 (0:00:00.899) 0:00:52.071 **** 2025-09-04 01:05:29.658219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-04 01:05:29.658236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-04 01:05:29.658248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-04 01:05:29.658266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-04 01:05:29.658282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-04 01:05:29.658293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-04 01:05:29.658305 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:05:29.658322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:05:29.658333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:05:29.658351 | orchestrator | 2025-09-04 01:05:29.658362 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-09-04 01:05:29.658373 | orchestrator | Thursday 04 September 2025 01:04:21 +0000 (0:00:03.894) 0:00:55.966 **** 2025-09-04 01:05:29.658384 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:05:29.658395 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:05:29.658406 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:05:29.658417 | orchestrator | 2025-09-04 01:05:29.658428 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-09-04 01:05:29.658438 | orchestrator | Thursday 04 September 2025 01:04:24 +0000 (0:00:03.083) 0:00:59.050 **** 2025-09-04 01:05:29.658449 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-04 01:05:29.658460 | orchestrator | 2025-09-04 01:05:29.658471 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-09-04 01:05:29.658482 | orchestrator | Thursday 04 September 2025 01:04:26 +0000 (0:00:02.082) 0:01:01.132 **** 2025-09-04 01:05:29.658492 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:05:29.658503 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:05:29.658514 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:05:29.658525 | orchestrator | 2025-09-04 01:05:29.658536 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-09-04 01:05:29.658546 | orchestrator | Thursday 04 September 2025 01:04:27 +0000 (0:00:00.600) 0:01:01.733 **** 2025-09-04 01:05:29.658562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-04 01:05:29.658574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-04 01:05:29.658592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-04 01:05:29.658610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-04 01:05:29.658622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 
'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-04 01:05:29.658638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-04 01:05:29.658650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:05:29.658661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:05:29.658672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:05:29.658689 | orchestrator | 2025-09-04 01:05:29.658700 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-09-04 01:05:29.658711 | orchestrator | Thursday 04 September 2025 01:04:36 +0000 (0:00:09.309) 0:01:11.042 **** 2025-09-04 01:05:29.658729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-04 01:05:29.658741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-04 01:05:29.658757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-04 01:05:29.658768 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:05:29.658780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-04 01:05:29.658791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-04 01:05:29.658814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-04 01:05:29.658826 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:05:29.658837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-04 01:05:29.658945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-04 01:05:29.658961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-04 01:05:29.658973 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:05:29.658984 | orchestrator | 2025-09-04 01:05:29.658995 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-09-04 01:05:29.659006 | orchestrator | Thursday 04 September 2025 01:04:38 +0000 (0:00:01.384) 
0:01:12.429 **** 2025-09-04 01:05:29.659018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-04 01:05:29.659044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-04 01:05:29.659056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-04 01:05:29.659072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-04 01:05:29.659083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-04 01:05:29.659095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-04 01:05:29.659117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:05:29.659137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:05:29.659149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:05:29.659160 | orchestrator | 2025-09-04 01:05:29.659171 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-04 01:05:29.659182 | orchestrator | Thursday 04 September 2025 01:04:41 +0000 (0:00:03.273) 0:01:15.702 **** 
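(Editor's aside, not part of the job output: the loop items printed in the barbican tasks above are kolla-ansible service definitions whose 'healthcheck' dict drives a 'healthcheck_curl http://<host>:9311' probe on each controller. Below is a minimal, purely illustrative Python sketch of exercising those same endpoints by hand; the addresses and retry/interval/timeout values are taken from the loop items, while the helper name probe() is hypothetical.)

# Illustrative sketch only: imitate the effect of the 'healthcheck_curl <url>' test
# from the barbican-api service definition printed above.
import time
import urllib.error
import urllib.request

def probe(url: str, retries: int = 3, interval: float = 30.0, timeout: float = 30.0) -> bool:
    """Return True if the endpoint answers any HTTP response within the retry budget."""
    for attempt in range(1, retries + 1):
        try:
            with urllib.request.urlopen(url, timeout=timeout):
                return True
        except urllib.error.HTTPError:
            # A non-2xx status still means the API is up and answering.
            return True
        except (urllib.error.URLError, OSError):
            if attempt < retries:
                time.sleep(interval)
    return False

if __name__ == "__main__":
    # Hosts as shown in the healthcheck entries above (interval shortened for a quick manual run).
    for host in ("192.168.16.10", "192.168.16.11", "192.168.16.12"):
        print(host, probe(f"http://{host}:9311", interval=1.0))

(End of aside; the job log continues.)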
2025-09-04 01:05:29.659193 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:05:29.659204 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:05:29.659215 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:05:29.659226 | orchestrator | 2025-09-04 01:05:29.659237 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-09-04 01:05:29.659248 | orchestrator | Thursday 04 September 2025 01:04:41 +0000 (0:00:00.274) 0:01:15.977 **** 2025-09-04 01:05:29.659258 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:05:29.659269 | orchestrator | 2025-09-04 01:05:29.659280 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-09-04 01:05:29.659290 | orchestrator | Thursday 04 September 2025 01:04:43 +0000 (0:00:02.197) 0:01:18.174 **** 2025-09-04 01:05:29.659301 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:05:29.659312 | orchestrator | 2025-09-04 01:05:29.659326 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-09-04 01:05:29.659336 | orchestrator | Thursday 04 September 2025 01:04:46 +0000 (0:00:02.454) 0:01:20.629 **** 2025-09-04 01:05:29.659346 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:05:29.659356 | orchestrator | 2025-09-04 01:05:29.659365 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-04 01:05:29.659375 | orchestrator | Thursday 04 September 2025 01:04:57 +0000 (0:00:11.710) 0:01:32.339 **** 2025-09-04 01:05:29.659385 | orchestrator | 2025-09-04 01:05:29.659394 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-04 01:05:29.659404 | orchestrator | Thursday 04 September 2025 01:04:58 +0000 (0:00:00.133) 0:01:32.473 **** 2025-09-04 01:05:29.659413 | orchestrator | 2025-09-04 01:05:29.659429 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-04 01:05:29.659438 | orchestrator | Thursday 04 September 2025 01:04:58 +0000 (0:00:00.125) 0:01:32.598 **** 2025-09-04 01:05:29.659448 | orchestrator | 2025-09-04 01:05:29.659458 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-09-04 01:05:29.659467 | orchestrator | Thursday 04 September 2025 01:04:58 +0000 (0:00:00.125) 0:01:32.723 **** 2025-09-04 01:05:29.659477 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:05:29.659486 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:05:29.659496 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:05:29.659505 | orchestrator | 2025-09-04 01:05:29.659515 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-09-04 01:05:29.659525 | orchestrator | Thursday 04 September 2025 01:05:11 +0000 (0:00:12.755) 0:01:45.479 **** 2025-09-04 01:05:29.659534 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:05:29.659544 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:05:29.659553 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:05:29.659563 | orchestrator | 2025-09-04 01:05:29.659572 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-09-04 01:05:29.659582 | orchestrator | Thursday 04 September 2025 01:05:18 +0000 (0:00:07.115) 0:01:52.594 **** 2025-09-04 01:05:29.659592 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:05:29.659601 | orchestrator | changed: 
[testbed-node-0] 2025-09-04 01:05:29.659611 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:05:29.659620 | orchestrator | 2025-09-04 01:05:29.659630 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 01:05:29.659640 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-04 01:05:29.659651 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-04 01:05:29.659661 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-04 01:05:29.659671 | orchestrator | 2025-09-04 01:05:29.659681 | orchestrator | 2025-09-04 01:05:29.659690 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 01:05:29.659700 | orchestrator | Thursday 04 September 2025 01:05:29 +0000 (0:00:10.964) 0:02:03.559 **** 2025-09-04 01:05:29.659709 | orchestrator | =============================================================================== 2025-09-04 01:05:29.659719 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.71s 2025-09-04 01:05:29.659733 | orchestrator | barbican : Restart barbican-api container ------------------------------ 12.76s 2025-09-04 01:05:29.659743 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.71s 2025-09-04 01:05:29.659753 | orchestrator | barbican : Restart barbican-worker container --------------------------- 10.97s 2025-09-04 01:05:29.659762 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.31s 2025-09-04 01:05:29.659772 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 7.12s 2025-09-04 01:05:29.659781 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.36s 2025-09-04 01:05:29.659791 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 5.09s 2025-09-04 01:05:29.659801 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.02s 2025-09-04 01:05:29.659810 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.93s 2025-09-04 01:05:29.659819 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.89s 2025-09-04 01:05:29.659829 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.39s 2025-09-04 01:05:29.659838 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.34s 2025-09-04 01:05:29.659870 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.27s 2025-09-04 01:05:29.659881 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 3.08s 2025-09-04 01:05:29.659890 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 3.02s 2025-09-04 01:05:29.659900 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.45s 2025-09-04 01:05:29.659910 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.20s 2025-09-04 01:05:29.659919 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 2.08s 2025-09-04 01:05:29.659929 | orchestrator | barbican : Ensuring vassals config 
directories exist -------------------- 1.49s 2025-09-04 01:05:32.689084 | orchestrator | 2025-09-04 01:05:32 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:05:32.689475 | orchestrator | 2025-09-04 01:05:32 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:05:32.692761 | orchestrator | 2025-09-04 01:05:32 | INFO  | Task 50a5c82f-8802-49b3-972a-41901acc44b4 is in state STARTED 2025-09-04 01:05:32.693711 | orchestrator | 2025-09-04 01:05:32 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:05:32.693735 | orchestrator | 2025-09-04 01:05:32 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:05:35.734607 | orchestrator | 2025-09-04 01:05:35 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:05:35.735907 | orchestrator | 2025-09-04 01:05:35 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:05:35.737154 | orchestrator | 2025-09-04 01:05:35 | INFO  | Task 50a5c82f-8802-49b3-972a-41901acc44b4 is in state STARTED 2025-09-04 01:05:35.738613 | orchestrator | 2025-09-04 01:05:35 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:05:35.738667 | orchestrator | 2025-09-04 01:05:35 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:05:38.795209 | orchestrator | 2025-09-04 01:05:38 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:05:38.795829 | orchestrator | 2025-09-04 01:05:38 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:05:38.796959 | orchestrator | 2025-09-04 01:05:38 | INFO  | Task 50a5c82f-8802-49b3-972a-41901acc44b4 is in state STARTED 2025-09-04 01:05:38.798119 | orchestrator | 2025-09-04 01:05:38 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:05:38.798223 | orchestrator | 2025-09-04 01:05:38 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:05:41.848175 | orchestrator | 2025-09-04 01:05:41 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:05:41.848755 | orchestrator | 2025-09-04 01:05:41 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:05:41.849667 | orchestrator | 2025-09-04 01:05:41 | INFO  | Task 50a5c82f-8802-49b3-972a-41901acc44b4 is in state STARTED 2025-09-04 01:05:41.852304 | orchestrator | 2025-09-04 01:05:41 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:05:41.852550 | orchestrator | 2025-09-04 01:05:41 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:05:44.895139 | orchestrator | 2025-09-04 01:05:44 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:05:44.898510 | orchestrator | 2025-09-04 01:05:44 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:05:44.901005 | orchestrator | 2025-09-04 01:05:44 | INFO  | Task 50a5c82f-8802-49b3-972a-41901acc44b4 is in state STARTED 2025-09-04 01:05:44.902986 | orchestrator | 2025-09-04 01:05:44 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:05:44.903572 | orchestrator | 2025-09-04 01:05:44 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:05:47.949298 | orchestrator | 2025-09-04 01:05:47 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:05:47.950522 | orchestrator | 2025-09-04 01:05:47 | INFO  | Task 
9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:05:47.951326 | orchestrator | 2025-09-04 01:05:47 | INFO  | Task 50a5c82f-8802-49b3-972a-41901acc44b4 is in state STARTED 2025-09-04 01:05:47.953164 | orchestrator | 2025-09-04 01:05:47 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:05:47.953205 | orchestrator | 2025-09-04 01:05:47 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:05:50.997594 | orchestrator | 2025-09-04 01:05:50 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:05:50.999230 | orchestrator | 2025-09-04 01:05:50 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:05:51.000753 | orchestrator | 2025-09-04 01:05:50 | INFO  | Task 50a5c82f-8802-49b3-972a-41901acc44b4 is in state STARTED 2025-09-04 01:05:51.002500 | orchestrator | 2025-09-04 01:05:51 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:05:51.002524 | orchestrator | 2025-09-04 01:05:51 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:05:54.037934 | orchestrator | 2025-09-04 01:05:54 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:05:54.038287 | orchestrator | 2025-09-04 01:05:54 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:05:54.039568 | orchestrator | 2025-09-04 01:05:54 | INFO  | Task 50a5c82f-8802-49b3-972a-41901acc44b4 is in state STARTED 2025-09-04 01:05:54.040371 | orchestrator | 2025-09-04 01:05:54 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:05:54.040394 | orchestrator | 2025-09-04 01:05:54 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:05:57.083626 | orchestrator | 2025-09-04 01:05:57 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:05:57.085568 | orchestrator | 2025-09-04 01:05:57 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:05:57.087474 | orchestrator | 2025-09-04 01:05:57 | INFO  | Task 50a5c82f-8802-49b3-972a-41901acc44b4 is in state STARTED 2025-09-04 01:05:57.088887 | orchestrator | 2025-09-04 01:05:57 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:05:57.089029 | orchestrator | 2025-09-04 01:05:57 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:06:00.117882 | orchestrator | 2025-09-04 01:06:00 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:06:00.117981 | orchestrator | 2025-09-04 01:06:00 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:06:00.118646 | orchestrator | 2025-09-04 01:06:00 | INFO  | Task 50a5c82f-8802-49b3-972a-41901acc44b4 is in state STARTED 2025-09-04 01:06:00.119614 | orchestrator | 2025-09-04 01:06:00 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:06:00.119657 | orchestrator | 2025-09-04 01:06:00 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:06:03.152060 | orchestrator | 2025-09-04 01:06:03 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:06:03.153291 | orchestrator | 2025-09-04 01:06:03 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:06:03.154425 | orchestrator | 2025-09-04 01:06:03 | INFO  | Task 50a5c82f-8802-49b3-972a-41901acc44b4 is in state STARTED 2025-09-04 01:06:03.155400 | orchestrator | 2025-09-04 01:06:03 | INFO  | Task 
1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:06:03.155433 | orchestrator | 2025-09-04 01:06:03 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:06:06.189908 | orchestrator | 2025-09-04 01:06:06 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:06:06.193252 | orchestrator | 2025-09-04 01:06:06 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:06:06.194174 | orchestrator | 2025-09-04 01:06:06 | INFO  | Task 50a5c82f-8802-49b3-972a-41901acc44b4 is in state STARTED 2025-09-04 01:06:06.195411 | orchestrator | 2025-09-04 01:06:06 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:06:06.195644 | orchestrator | 2025-09-04 01:06:06 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:06:09.242821 | orchestrator | 2025-09-04 01:06:09 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:06:09.243780 | orchestrator | 2025-09-04 01:06:09 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:06:09.245363 | orchestrator | 2025-09-04 01:06:09 | INFO  | Task 50a5c82f-8802-49b3-972a-41901acc44b4 is in state STARTED 2025-09-04 01:06:09.246596 | orchestrator | 2025-09-04 01:06:09 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:06:09.246934 | orchestrator | 2025-09-04 01:06:09 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:06:12.277817 | orchestrator | 2025-09-04 01:06:12 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:06:12.278538 | orchestrator | 2025-09-04 01:06:12 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:06:12.280012 | orchestrator | 2025-09-04 01:06:12 | INFO  | Task 50a5c82f-8802-49b3-972a-41901acc44b4 is in state STARTED 2025-09-04 01:06:12.280879 | orchestrator | 2025-09-04 01:06:12 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:06:12.280898 | orchestrator | 2025-09-04 01:06:12 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:06:15.314985 | orchestrator | 2025-09-04 01:06:15 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:06:15.316335 | orchestrator | 2025-09-04 01:06:15 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:06:15.324552 | orchestrator | 2025-09-04 01:06:15 | INFO  | Task 50a5c82f-8802-49b3-972a-41901acc44b4 is in state STARTED 2025-09-04 01:06:15.326384 | orchestrator | 2025-09-04 01:06:15 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:06:15.326403 | orchestrator | 2025-09-04 01:06:15 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:06:18.352343 | orchestrator | 2025-09-04 01:06:18 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:06:18.353306 | orchestrator | 2025-09-04 01:06:18 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:06:18.353355 | orchestrator | 2025-09-04 01:06:18 | INFO  | Task 50a5c82f-8802-49b3-972a-41901acc44b4 is in state STARTED 2025-09-04 01:06:18.353368 | orchestrator | 2025-09-04 01:06:18 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:06:18.353380 | orchestrator | 2025-09-04 01:06:18 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:06:21.381698 | orchestrator | 2025-09-04 01:06:21 | INFO  | Task 
9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:06:21.382600 | orchestrator | 2025-09-04 01:06:21 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:06:21.383250 | orchestrator | 2025-09-04 01:06:21 | INFO  | Task 50a5c82f-8802-49b3-972a-41901acc44b4 is in state STARTED 2025-09-04 01:06:21.383946 | orchestrator | 2025-09-04 01:06:21 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:06:21.383971 | orchestrator | 2025-09-04 01:06:21 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:06:24.407401 | orchestrator | 2025-09-04 01:06:24 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:06:24.407511 | orchestrator | 2025-09-04 01:06:24 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:06:24.407526 | orchestrator | 2025-09-04 01:06:24 | INFO  | Task 50a5c82f-8802-49b3-972a-41901acc44b4 is in state STARTED 2025-09-04 01:06:24.407538 | orchestrator | 2025-09-04 01:06:24 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:06:24.407652 | orchestrator | 2025-09-04 01:06:24 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:06:27.447272 | orchestrator | 2025-09-04 01:06:27 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:06:27.449900 | orchestrator | 2025-09-04 01:06:27 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:06:27.451803 | orchestrator | 2025-09-04 01:06:27 | INFO  | Task 50a5c82f-8802-49b3-972a-41901acc44b4 is in state STARTED 2025-09-04 01:06:27.453471 | orchestrator | 2025-09-04 01:06:27 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:06:27.453498 | orchestrator | 2025-09-04 01:06:27 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:06:30.503129 | orchestrator | 2025-09-04 01:06:30 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:06:30.503704 | orchestrator | 2025-09-04 01:06:30 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:06:30.505347 | orchestrator | 2025-09-04 01:06:30 | INFO  | Task 50a5c82f-8802-49b3-972a-41901acc44b4 is in state STARTED 2025-09-04 01:06:30.506351 | orchestrator | 2025-09-04 01:06:30 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:06:30.506616 | orchestrator | 2025-09-04 01:06:30 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:06:33.560371 | orchestrator | 2025-09-04 01:06:33 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:06:33.561079 | orchestrator | 2025-09-04 01:06:33 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:06:33.562001 | orchestrator | 2025-09-04 01:06:33 | INFO  | Task 50a5c82f-8802-49b3-972a-41901acc44b4 is in state STARTED 2025-09-04 01:06:33.562894 | orchestrator | 2025-09-04 01:06:33 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:06:33.562922 | orchestrator | 2025-09-04 01:06:33 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:06:36.604112 | orchestrator | 2025-09-04 01:06:36 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:06:36.604214 | orchestrator | 2025-09-04 01:06:36 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:06:36.604444 | orchestrator | 2025-09-04 01:06:36 | INFO  | Task 
50a5c82f-8802-49b3-972a-41901acc44b4 is in state STARTED 2025-09-04 01:06:36.605759 | orchestrator | 2025-09-04 01:06:36 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:06:36.605804 | orchestrator | 2025-09-04 01:06:36 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:06:39.643057 | orchestrator | 2025-09-04 01:06:39 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:06:39.644839 | orchestrator | 2025-09-04 01:06:39 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:06:39.647072 | orchestrator | 2025-09-04 01:06:39 | INFO  | Task 50a5c82f-8802-49b3-972a-41901acc44b4 is in state STARTED 2025-09-04 01:06:39.650100 | orchestrator | 2025-09-04 01:06:39 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:06:39.650451 | orchestrator | 2025-09-04 01:06:39 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:06:42.685499 | orchestrator | 2025-09-04 01:06:42 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:06:42.687623 | orchestrator | 2025-09-04 01:06:42 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:06:42.690010 | orchestrator | 2025-09-04 01:06:42 | INFO  | Task 50a5c82f-8802-49b3-972a-41901acc44b4 is in state STARTED 2025-09-04 01:06:42.691935 | orchestrator | 2025-09-04 01:06:42 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:06:42.691973 | orchestrator | 2025-09-04 01:06:42 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:06:45.718014 | orchestrator | 2025-09-04 01:06:45 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:06:45.718284 | orchestrator | 2025-09-04 01:06:45 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:06:45.720333 | orchestrator | 2025-09-04 01:06:45 | INFO  | Task 50a5c82f-8802-49b3-972a-41901acc44b4 is in state STARTED 2025-09-04 01:06:45.721161 | orchestrator | 2025-09-04 01:06:45 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:06:45.721187 | orchestrator | 2025-09-04 01:06:45 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:06:48.751517 | orchestrator | 2025-09-04 01:06:48 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:06:48.751619 | orchestrator | 2025-09-04 01:06:48 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:06:48.752033 | orchestrator | 2025-09-04 01:06:48 | INFO  | Task 50a5c82f-8802-49b3-972a-41901acc44b4 is in state STARTED 2025-09-04 01:06:48.753091 | orchestrator | 2025-09-04 01:06:48 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:06:48.753189 | orchestrator | 2025-09-04 01:06:48 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:06:51.776046 | orchestrator | 2025-09-04 01:06:51 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:06:51.779363 | orchestrator | 2025-09-04 01:06:51 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:06:51.779652 | orchestrator | 2025-09-04 01:06:51 | INFO  | Task 50a5c82f-8802-49b3-972a-41901acc44b4 is in state STARTED 2025-09-04 01:06:51.780370 | orchestrator | 2025-09-04 01:06:51 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:06:51.780383 | orchestrator | 2025-09-04 01:06:51 | INFO  | Wait 1 
second(s) until the next check 2025-09-04 01:06:54.826789 | orchestrator | 2025-09-04 01:06:54 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:06:54.830104 | orchestrator | 2025-09-04 01:06:54 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:06:54.831413 | orchestrator | 2025-09-04 01:06:54 | INFO  | Task 84cd4556-0088-4a34-8d8b-feb9399c7994 is in state STARTED 2025-09-04 01:06:54.832579 | orchestrator | 2025-09-04 01:06:54 | INFO  | Task 50a5c82f-8802-49b3-972a-41901acc44b4 is in state SUCCESS 2025-09-04 01:06:54.834505 | orchestrator | 2025-09-04 01:06:54 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:06:54.834529 | orchestrator | 2025-09-04 01:06:54 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:06:57.879054 | orchestrator | 2025-09-04 01:06:57 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:06:57.881639 | orchestrator | 2025-09-04 01:06:57 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:06:57.885140 | orchestrator | 2025-09-04 01:06:57 | INFO  | Task 84cd4556-0088-4a34-8d8b-feb9399c7994 is in state STARTED 2025-09-04 01:06:57.889171 | orchestrator | 2025-09-04 01:06:57 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:06:57.889667 | orchestrator | 2025-09-04 01:06:57 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:07:00.936892 | orchestrator | 2025-09-04 01:07:00 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:07:00.939696 | orchestrator | 2025-09-04 01:07:00 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:07:00.941870 | orchestrator | 2025-09-04 01:07:00 | INFO  | Task 84cd4556-0088-4a34-8d8b-feb9399c7994 is in state STARTED 2025-09-04 01:07:00.944487 | orchestrator | 2025-09-04 01:07:00 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:07:00.944512 | orchestrator | 2025-09-04 01:07:00 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:07:03.989442 | orchestrator | 2025-09-04 01:07:03 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:07:03.990003 | orchestrator | 2025-09-04 01:07:03 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state STARTED 2025-09-04 01:07:03.991710 | orchestrator | 2025-09-04 01:07:03 | INFO  | Task 84cd4556-0088-4a34-8d8b-feb9399c7994 is in state STARTED 2025-09-04 01:07:03.992635 | orchestrator | 2025-09-04 01:07:03 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:07:03.993037 | orchestrator | 2025-09-04 01:07:03 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:07:07.034900 | orchestrator | 2025-09-04 01:07:07 | INFO  | Task f492c67d-0836-4173-b7fd-7a6578686478 is in state STARTED 2025-09-04 01:07:07.036464 | orchestrator | 2025-09-04 01:07:07 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:07:07.042076 | orchestrator | 2025-09-04 01:07:07 | INFO  | Task 9b0a480e-27a4-4191-8c57-a67d0e26347e is in state SUCCESS 2025-09-04 01:07:07.043761 | orchestrator | 2025-09-04 01:07:07.043881 | orchestrator | 2025-09-04 01:07:07.043895 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-09-04 01:07:07.043907 | orchestrator | 2025-09-04 01:07:07.043918 | orchestrator | TASK [Ensure the destination directory exists] 
********************************* 2025-09-04 01:07:07.043931 | orchestrator | Thursday 04 September 2025 01:05:34 +0000 (0:00:00.111) 0:00:00.111 **** 2025-09-04 01:07:07.043942 | orchestrator | changed: [localhost] 2025-09-04 01:07:07.043955 | orchestrator | 2025-09-04 01:07:07.043966 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-09-04 01:07:07.044029 | orchestrator | Thursday 04 September 2025 01:05:35 +0000 (0:00:00.888) 0:00:01.000 **** 2025-09-04 01:07:07.044043 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left). 2025-09-04 01:07:07.044054 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (2 retries left). 2025-09-04 01:07:07.044090 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (1 retries left). 2025-09-04 01:07:07.044105 | orchestrator | fatal: [localhost]: FAILED! => {"attempts": 3, "changed": false, "dest": "/share/ironic/ironic/ironic-agent.initramfs", "elapsed": 10, "msg": "Request failed: ", "url": "https://tarballs.opendev.org/openstack/ironic-python-agent/dib/files/ipa-centos9-stable-2024.2.initramfs"} 2025-09-04 01:07:07.044118 | orchestrator | 2025-09-04 01:07:07.044130 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 01:07:07.044207 | orchestrator | localhost : ok=1  changed=1  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-09-04 01:07:07.044222 | orchestrator | 2025-09-04 01:07:07.044233 | orchestrator | 2025-09-04 01:07:07.044244 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 01:07:07.044255 | orchestrator | Thursday 04 September 2025 01:06:52 +0000 (0:01:17.092) 0:01:18.093 **** 2025-09-04 01:07:07.044266 | orchestrator | =============================================================================== 2025-09-04 01:07:07.044277 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 77.09s 2025-09-04 01:07:07.044288 | orchestrator | Ensure the destination directory exists --------------------------------- 0.89s 2025-09-04 01:07:07.044300 | orchestrator | 2025-09-04 01:07:07.044311 | orchestrator | 2025-09-04 01:07:07.044323 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-04 01:07:07.044337 | orchestrator | 2025-09-04 01:07:07.044384 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-04 01:07:07.044398 | orchestrator | Thursday 04 September 2025 01:04:02 +0000 (0:00:00.304) 0:00:00.304 **** 2025-09-04 01:07:07.044412 | orchestrator | ok: [testbed-node-0] 2025-09-04 01:07:07.044426 | orchestrator | ok: [testbed-node-1] 2025-09-04 01:07:07.044439 | orchestrator | ok: [testbed-node-2] 2025-09-04 01:07:07.044453 | orchestrator | 2025-09-04 01:07:07.044468 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-04 01:07:07.044481 | orchestrator | Thursday 04 September 2025 01:04:03 +0000 (0:00:00.306) 0:00:00.611 **** 2025-09-04 01:07:07.044496 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-09-04 01:07:07.044510 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-09-04 01:07:07.044655 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-09-04 01:07:07.044670 | orchestrator | 2025-09-04 01:07:07.044710 | 
orchestrator | PLAY [Apply role designate] **************************************************** 2025-09-04 01:07:07.044722 | orchestrator | 2025-09-04 01:07:07.044732 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-04 01:07:07.044743 | orchestrator | Thursday 04 September 2025 01:04:03 +0000 (0:00:00.449) 0:00:01.060 **** 2025-09-04 01:07:07.044754 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 01:07:07.044766 | orchestrator | 2025-09-04 01:07:07.044777 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-09-04 01:07:07.044787 | orchestrator | Thursday 04 September 2025 01:04:04 +0000 (0:00:00.591) 0:00:01.651 **** 2025-09-04 01:07:07.044798 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-09-04 01:07:07.044825 | orchestrator | 2025-09-04 01:07:07.044837 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-09-04 01:07:07.044848 | orchestrator | Thursday 04 September 2025 01:04:08 +0000 (0:00:03.974) 0:00:05.626 **** 2025-09-04 01:07:07.044858 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-09-04 01:07:07.044870 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-09-04 01:07:07.044881 | orchestrator | 2025-09-04 01:07:07.044902 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-09-04 01:07:07.044914 | orchestrator | Thursday 04 September 2025 01:04:14 +0000 (0:00:06.454) 0:00:12.081 **** 2025-09-04 01:07:07.044925 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-04 01:07:07.044936 | orchestrator | 2025-09-04 01:07:07.044947 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-09-04 01:07:07.044958 | orchestrator | Thursday 04 September 2025 01:04:17 +0000 (0:00:03.209) 0:00:15.290 **** 2025-09-04 01:07:07.044968 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-04 01:07:07.044979 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-09-04 01:07:07.044990 | orchestrator | 2025-09-04 01:07:07.045001 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-09-04 01:07:07.045012 | orchestrator | Thursday 04 September 2025 01:04:21 +0000 (0:00:04.033) 0:00:19.324 **** 2025-09-04 01:07:07.045023 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-04 01:07:07.045034 | orchestrator | 2025-09-04 01:07:07.045058 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-09-04 01:07:07.045069 | orchestrator | Thursday 04 September 2025 01:04:25 +0000 (0:00:03.370) 0:00:22.694 **** 2025-09-04 01:07:07.045081 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-09-04 01:07:07.045092 | orchestrator | 2025-09-04 01:07:07.045102 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-09-04 01:07:07.045114 | orchestrator | Thursday 04 September 2025 01:04:30 +0000 (0:00:04.831) 0:00:27.525 **** 2025-09-04 01:07:07.045128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-04 01:07:07.045147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-04 01:07:07.045165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-04 01:07:07.045188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-04 01:07:07.045211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-04 01:07:07.045223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-04 01:07:07.045235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.045248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.045265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.045286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}}) 2025-09-04 01:07:07.045299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.045319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.045331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.045343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.045355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.045372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.045390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.045402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.045413 | orchestrator | 2025-09-04 01:07:07.045424 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-09-04 01:07:07.045436 | orchestrator | Thursday 04 September 2025 01:04:33 +0000 (0:00:03.028) 0:00:30.553 **** 2025-09-04 01:07:07.045447 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:07:07.045458 | orchestrator | 2025-09-04 01:07:07.045474 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-09-04 01:07:07.045486 | orchestrator | Thursday 04 September 2025 01:04:33 +0000 (0:00:00.132) 0:00:30.686 **** 2025-09-04 01:07:07.045497 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:07:07.045508 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:07:07.045519 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:07:07.045530 | orchestrator | 2025-09-04 01:07:07.045541 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-04 01:07:07.045551 | orchestrator | Thursday 04 September 2025 01:04:33 +0000 (0:00:00.480) 0:00:31.166 **** 2025-09-04 01:07:07.045562 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 01:07:07.045573 | orchestrator | 2025-09-04 01:07:07.045584 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-09-04 01:07:07.045595 | orchestrator | Thursday 04 September 2025 01:04:35 +0000 (0:00:01.411) 0:00:32.577 **** 2025-09-04 01:07:07.045607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-04 01:07:07.045624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-04 01:07:07.045644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-04 01:07:07.045656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-04 01:07:07.045674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-04 
01:07:07.045687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-04 01:07:07.045699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.045716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.045733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.045745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.045757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.045775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.045787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.045799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.045833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.045885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.045900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 
'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.045918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.045930 | orchestrator | 2025-09-04 01:07:07.045941 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-09-04 01:07:07.045952 | orchestrator | Thursday 04 September 2025 01:04:42 +0000 (0:00:06.990) 0:00:39.568 **** 2025-09-04 01:07:07.045964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-04 01:07:07.045976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-04 01:07:07.045994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-central 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.046011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.046072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.046084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.046762 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:07:07.046786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-04 01:07:07.046798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}})  2025-09-04 01:07:07.046868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.046888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.046900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.046912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.046923 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:07:07.046944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-04 01:07:07.046957 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-04 01:07:07.046976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.046992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.047004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.047016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.047027 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:07:07.047038 | orchestrator | 2025-09-04 01:07:07.047049 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-09-04 01:07:07.047082 | orchestrator | Thursday 04 September 2025 01:04:43 +0000 (0:00:01.102) 0:00:40.671 **** 2025-09-04 01:07:07.047101 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-04 01:07:07.047120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-04 01:07:07.047132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.047148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.047160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.047172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 
'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.047183 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:07:07.047200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-04 01:07:07.047219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-04 01:07:07.047230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.047242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.047258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.047270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.047281 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:07:07.047299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-04 01:07:07.047317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-04 01:07:07.047328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.047339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 
'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.047356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.047370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.047384 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:07:07.047398 | orchestrator | 2025-09-04 01:07:07.047412 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-09-04 01:07:07.047425 | orchestrator | Thursday 04 September 2025 01:04:45 +0000 (0:00:02.145) 0:00:42.816 **** 2025-09-04 01:07:07.047446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-04 01:07:07.047465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 
'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-04 01:07:07.047477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-04 01:07:07.047493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-04 01:07:07.047505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-04 01:07:07.047517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-04 01:07:07.047545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.047557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.047568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.047580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.047596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.047608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.047620 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.047643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.047655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.047667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.047678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.047694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.047706 | orchestrator | 2025-09-04 01:07:07.047716 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-09-04 01:07:07.047727 | orchestrator | Thursday 04 September 2025 01:04:52 +0000 (0:00:06.849) 0:00:49.665 **** 2025-09-04 01:07:07.047739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-04 01:07:07.047763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-04 01:07:07.047775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-04 01:07:07.047787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-04 01:07:07.047824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-04 01:07:07.047838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-04 01:07:07.047861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.047874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.047886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.047897 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.047913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.047925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.047943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.047959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.047971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.047983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.047994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.048011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.048022 | orchestrator | 2025-09-04 01:07:07.048033 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-09-04 01:07:07.048044 | orchestrator | Thursday 04 September 2025 01:05:13 +0000 (0:00:21.229) 0:01:10.895 **** 2025-09-04 01:07:07.048055 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-04 01:07:07.048073 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-04 01:07:07.048084 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-04 01:07:07.048095 | orchestrator | 2025-09-04 01:07:07.048105 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-09-04 01:07:07.048116 | orchestrator | Thursday 04 September 2025 01:05:20 +0000 (0:00:06.958) 0:01:17.854 **** 2025-09-04 01:07:07.048127 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-04 01:07:07.048138 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-04 01:07:07.048148 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-04 01:07:07.048159 | orchestrator | 2025-09-04 01:07:07.048170 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-09-04 01:07:07.048181 | orchestrator | Thursday 04 September 2025 01:05:24 
+0000 (0:00:03.662) 0:01:21.516 **** 2025-09-04 01:07:07.048198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-04 01:07:07.048211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-04 01:07:07.048222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-04 01:07:07.048238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-04 01:07:07.048257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 
'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.048269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.048287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.048299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-04 01:07:07.048311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.048322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.048345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.048356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-04 01:07:07.048368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.048386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.048398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.048409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.048421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.048446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.048458 | orchestrator | 2025-09-04 01:07:07.048469 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-09-04 01:07:07.048480 | orchestrator | Thursday 04 September 2025 01:05:26 +0000 (0:00:02.587) 0:01:24.104 **** 2025-09-04 01:07:07.048492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-04 01:07:07.048509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-04 01:07:07.048521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-04 01:07:07.048533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-04 01:07:07.048557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.048569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.048580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.048598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-04 01:07:07.048610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.048621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.048639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.048655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-04 01:07:07.048667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.048733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.048747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.048759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.048770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.048794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': 
'30'}}}) 2025-09-04 01:07:07.048825 | orchestrator | 2025-09-04 01:07:07.048837 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-04 01:07:07.048848 | orchestrator | Thursday 04 September 2025 01:05:29 +0000 (0:00:02.440) 0:01:26.545 **** 2025-09-04 01:07:07.048859 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:07:07.048870 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:07:07.048881 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:07:07.048892 | orchestrator | 2025-09-04 01:07:07.048903 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-09-04 01:07:07.048913 | orchestrator | Thursday 04 September 2025 01:05:29 +0000 (0:00:00.395) 0:01:26.941 **** 2025-09-04 01:07:07.048925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-04 01:07:07.048943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-04 01:07:07.048956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.048974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.048986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.049002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.049013 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:07:07.049025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-04 01:07:07.049042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-04 01:07:07.049054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.049071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.049083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.049099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.049111 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:07:07.049122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-04 01:07:07.049134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-04 01:07:07.049151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.049169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.049181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.049193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-04 01:07:07.049204 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:07:07.049214 | orchestrator | 2025-09-04 01:07:07.049230 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-09-04 01:07:07.049241 | orchestrator | Thursday 04 September 2025 01:05:30 +0000 (0:00:01.449) 0:01:28.390 **** 2025-09-04 01:07:07.049253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-04 01:07:07.049270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-04 01:07:07.049282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-04 01:07:07.049304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-04 01:07:07.049316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-04 01:07:07.049335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-04 01:07:07.049347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.049363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.049386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.049397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.049409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.049421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.049438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.049450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.049466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.049485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.049496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.049508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:07:07.049519 | orchestrator | 2025-09-04 01:07:07.049530 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-04 01:07:07.049541 | orchestrator | Thursday 04 September 2025 01:05:35 +0000 (0:00:04.661) 0:01:33.051 **** 2025-09-04 01:07:07.049552 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:07:07.049562 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:07:07.049573 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:07:07.049584 | orchestrator | 2025-09-04 01:07:07.049595 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-09-04 01:07:07.049606 | orchestrator | Thursday 04 September 2025 01:05:35 +0000 (0:00:00.275) 0:01:33.327 **** 2025-09-04 01:07:07.049616 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-09-04 01:07:07.049627 | orchestrator | 2025-09-04 01:07:07.049642 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-09-04 01:07:07.049653 | orchestrator | Thursday 04 September 2025 01:05:38 +0000 (0:00:02.270) 0:01:35.598 **** 2025-09-04 01:07:07.049665 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-04 01:07:07.049676 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-09-04 01:07:07.049686 | orchestrator | 2025-09-04 01:07:07.049697 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-09-04 01:07:07.049708 | orchestrator | Thursday 04 September 2025 01:05:40 +0000 (0:00:02.258) 0:01:37.857 **** 2025-09-04 01:07:07.049719 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:07:07.049730 | orchestrator | 2025-09-04 01:07:07.049741 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-04 01:07:07.049751 | orchestrator | Thursday 04 September 2025 01:05:57 +0000 (0:00:16.984) 0:01:54.841 **** 2025-09-04 01:07:07.049762 | orchestrator | 2025-09-04 01:07:07.049773 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-04 01:07:07.049783 | orchestrator | Thursday 04 September 2025 01:05:57 +0000 (0:00:00.398) 0:01:55.240 **** 2025-09-04 01:07:07.049815 | orchestrator | 2025-09-04 01:07:07.049827 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-04 01:07:07.049837 | orchestrator | Thursday 04 September 2025 01:05:57 +0000 (0:00:00.139) 0:01:55.379 **** 2025-09-04 
01:07:07.049848 | orchestrator | 2025-09-04 01:07:07.049859 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-09-04 01:07:07.049870 | orchestrator | Thursday 04 September 2025 01:05:58 +0000 (0:00:00.156) 0:01:55.536 **** 2025-09-04 01:07:07.049881 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:07:07.049891 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:07:07.049902 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:07:07.049913 | orchestrator | 2025-09-04 01:07:07.049924 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-09-04 01:07:07.049934 | orchestrator | Thursday 04 September 2025 01:06:13 +0000 (0:00:14.950) 0:02:10.486 **** 2025-09-04 01:07:07.049945 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:07:07.049956 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:07:07.049967 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:07:07.049978 | orchestrator | 2025-09-04 01:07:07.049988 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-09-04 01:07:07.049999 | orchestrator | Thursday 04 September 2025 01:06:21 +0000 (0:00:08.345) 0:02:18.832 **** 2025-09-04 01:07:07.050010 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:07:07.050059 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:07:07.050071 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:07:07.050082 | orchestrator | 2025-09-04 01:07:07.050093 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-09-04 01:07:07.050104 | orchestrator | Thursday 04 September 2025 01:06:33 +0000 (0:00:11.614) 0:02:30.446 **** 2025-09-04 01:07:07.050114 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:07:07.050125 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:07:07.050136 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:07:07.050146 | orchestrator | 2025-09-04 01:07:07.050157 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-09-04 01:07:07.050168 | orchestrator | Thursday 04 September 2025 01:06:39 +0000 (0:00:06.596) 0:02:37.043 **** 2025-09-04 01:07:07.050178 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:07:07.050189 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:07:07.050200 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:07:07.050211 | orchestrator | 2025-09-04 01:07:07.050221 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-09-04 01:07:07.050232 | orchestrator | Thursday 04 September 2025 01:06:48 +0000 (0:00:08.684) 0:02:45.727 **** 2025-09-04 01:07:07.050243 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:07:07.050253 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:07:07.050264 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:07:07.050275 | orchestrator | 2025-09-04 01:07:07.050285 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-09-04 01:07:07.050296 | orchestrator | Thursday 04 September 2025 01:06:57 +0000 (0:00:08.765) 0:02:54.492 **** 2025-09-04 01:07:07.050306 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:07:07.050317 | orchestrator | 2025-09-04 01:07:07.050328 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 01:07:07.050339 | orchestrator | testbed-node-0 : 
ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-04 01:07:07.050352 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-04 01:07:07.050363 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-04 01:07:07.050374 | orchestrator |
2025-09-04 01:07:07.050384 | orchestrator |
2025-09-04 01:07:07.050395 | orchestrator | TASKS RECAP ********************************************************************
2025-09-04 01:07:07.050412 | orchestrator | Thursday 04 September 2025 01:07:04 +0000 (0:00:07.632) 0:03:02.125 ****
2025-09-04 01:07:07.050423 | orchestrator | ===============================================================================
2025-09-04 01:07:07.050434 | orchestrator | designate : Copying over designate.conf -------------------------------- 21.23s
2025-09-04 01:07:07.050445 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.98s
2025-09-04 01:07:07.050455 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 14.95s
2025-09-04 01:07:07.050466 | orchestrator | designate : Restart designate-central container ------------------------ 11.61s
2025-09-04 01:07:07.050476 | orchestrator | designate : Restart designate-worker container -------------------------- 8.76s
2025-09-04 01:07:07.050487 | orchestrator | designate : Restart designate-mdns container ---------------------------- 8.69s
2025-09-04 01:07:07.050502 | orchestrator | designate : Restart designate-api container ----------------------------- 8.35s
2025-09-04 01:07:07.050513 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.63s
2025-09-04 01:07:07.050524 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.99s
2025-09-04 01:07:07.050534 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 6.96s
2025-09-04 01:07:07.050545 | orchestrator | designate : Copying over config.json files for services ----------------- 6.85s
2025-09-04 01:07:07.050556 | orchestrator | designate : Restart designate-producer container ------------------------ 6.60s
2025-09-04 01:07:07.050566 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.45s
2025-09-04 01:07:07.050577 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.83s
2025-09-04 01:07:07.050588 | orchestrator | designate : Check designate containers ---------------------------------- 4.66s
2025-09-04 01:07:07.050598 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.03s
2025-09-04 01:07:07.050609 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.97s
2025-09-04 01:07:07.050620 | orchestrator | designate : Copying over named.conf ------------------------------------- 3.66s
2025-09-04 01:07:07.050630 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.37s
2025-09-04 01:07:07.050641 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.21s
2025-09-04 01:07:07.050651 | orchestrator | 2025-09-04 01:07:07 | INFO  | Task 84cd4556-0088-4a34-8d8b-feb9399c7994 is in state STARTED
2025-09-04 01:07:07.050662 | orchestrator | 2025-09-04 01:07:07 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED
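The Designate recap above includes "Copying over pools.yaml" and "Non-destructive DNS pools update", i.e. the pool definition that tells Designate which BIND9 backends it manages. As an illustrative sketch only (the schema follows Designate's documented pools.yaml format; all hostnames, addresses and key paths are placeholders, not values taken from this testbed), a minimal single-pool BIND9 configuration typically looks like this:

    - name: default
      description: Default BIND9 pool
      attributes: {}
      ns_records:
        - hostname: ns1.example.test.      # public NS name advertised for zones in this pool (placeholder)
          priority: 1
      nameservers:
        - host: 192.0.2.10                 # BIND9 instance Designate polls to verify zone changes (placeholder)
          port: 53
      targets:
        - type: bind9
          description: BIND9 backend
          masters:
            - host: 192.0.2.20             # designate-mdns address the backend transfers zones from (placeholder)
              port: 5354
          options:
            host: 192.0.2.10               # where designate-worker sends rndc commands (placeholder)
            port: 53
            rndc_host: 192.0.2.10
            rndc_port: 953
            rndc_key_file: /etc/designate/rndc.key

The "Non-destructive DNS pools update" task presumably applies this file with designate-manage pool update, which by default only adds or modifies pool entries and does not remove pools that are absent from the file.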
2025-09-04 01:07:07.050673 | orchestrator | 2025-09-04 01:07:07 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:07:10.103524 | orchestrator | 2025-09-04 01:07:10 | INFO  | Task f492c67d-0836-4173-b7fd-7a6578686478 is in state STARTED 2025-09-04 01:07:10.104715 | orchestrator | 2025-09-04 01:07:10 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:07:10.107295 | orchestrator | 2025-09-04 01:07:10 | INFO  | Task 84cd4556-0088-4a34-8d8b-feb9399c7994 is in state STARTED 2025-09-04 01:07:10.109054 | orchestrator | 2025-09-04 01:07:10 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:07:10.109116 | orchestrator | 2025-09-04 01:07:10 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:07:13.153777 | orchestrator | 2025-09-04 01:07:13 | INFO  | Task f492c67d-0836-4173-b7fd-7a6578686478 is in state STARTED 2025-09-04 01:07:13.156563 | orchestrator | 2025-09-04 01:07:13 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:07:13.161045 | orchestrator | 2025-09-04 01:07:13 | INFO  | Task 84cd4556-0088-4a34-8d8b-feb9399c7994 is in state STARTED 2025-09-04 01:07:13.164547 | orchestrator | 2025-09-04 01:07:13 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state STARTED 2025-09-04 01:07:13.165492 | orchestrator | 2025-09-04 01:07:13 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:07:16.217397 | orchestrator | 2025-09-04 01:07:16 | INFO  | Task f492c67d-0836-4173-b7fd-7a6578686478 is in state STARTED 2025-09-04 01:07:16.218644 | orchestrator | 2025-09-04 01:07:16 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED 2025-09-04 01:07:16.220497 | orchestrator | 2025-09-04 01:07:16 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:07:16.221896 | orchestrator | 2025-09-04 01:07:16 | INFO  | Task 84cd4556-0088-4a34-8d8b-feb9399c7994 is in state STARTED 2025-09-04 01:07:16.225040 | orchestrator | 2025-09-04 01:07:16.225065 | orchestrator | 2025-09-04 01:07:16 | INFO  | Task 1783dd8d-c048-477b-b516-e7d608322a60 is in state SUCCESS 2025-09-04 01:07:16.226588 | orchestrator | 2025-09-04 01:07:16.226618 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-04 01:07:16.226631 | orchestrator | 2025-09-04 01:07:16.226642 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-04 01:07:16.226653 | orchestrator | Thursday 04 September 2025 01:02:58 +0000 (0:00:00.270) 0:00:00.271 **** 2025-09-04 01:07:16.226665 | orchestrator | ok: [testbed-node-0] 2025-09-04 01:07:16.226677 | orchestrator | ok: [testbed-node-1] 2025-09-04 01:07:16.226688 | orchestrator | ok: [testbed-node-2] 2025-09-04 01:07:16.226698 | orchestrator | ok: [testbed-node-3] 2025-09-04 01:07:16.226709 | orchestrator | ok: [testbed-node-4] 2025-09-04 01:07:16.226720 | orchestrator | ok: [testbed-node-5] 2025-09-04 01:07:16.226731 | orchestrator | 2025-09-04 01:07:16.226741 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-04 01:07:16.226752 | orchestrator | Thursday 04 September 2025 01:02:59 +0000 (0:00:00.696) 0:00:00.967 **** 2025-09-04 01:07:16.226763 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-09-04 01:07:16.226775 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-09-04 01:07:16.226786 | orchestrator | ok: [testbed-node-2] => 
(item=enable_neutron_True) 2025-09-04 01:07:16.226815 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-09-04 01:07:16.226828 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-09-04 01:07:16.226853 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-09-04 01:07:16.226865 | orchestrator | 2025-09-04 01:07:16.226877 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-09-04 01:07:16.226888 | orchestrator | 2025-09-04 01:07:16.226899 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-04 01:07:16.226910 | orchestrator | Thursday 04 September 2025 01:02:59 +0000 (0:00:00.574) 0:00:01.542 **** 2025-09-04 01:07:16.226923 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 01:07:16.226934 | orchestrator | 2025-09-04 01:07:16.227432 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-09-04 01:07:16.227448 | orchestrator | Thursday 04 September 2025 01:03:00 +0000 (0:00:01.195) 0:00:02.738 **** 2025-09-04 01:07:16.227459 | orchestrator | ok: [testbed-node-0] 2025-09-04 01:07:16.227470 | orchestrator | ok: [testbed-node-1] 2025-09-04 01:07:16.227481 | orchestrator | ok: [testbed-node-2] 2025-09-04 01:07:16.227491 | orchestrator | ok: [testbed-node-3] 2025-09-04 01:07:16.227502 | orchestrator | ok: [testbed-node-4] 2025-09-04 01:07:16.227512 | orchestrator | ok: [testbed-node-5] 2025-09-04 01:07:16.227523 | orchestrator | 2025-09-04 01:07:16.227534 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-09-04 01:07:16.227546 | orchestrator | Thursday 04 September 2025 01:03:02 +0000 (0:00:01.384) 0:00:04.123 **** 2025-09-04 01:07:16.227556 | orchestrator | ok: [testbed-node-1] 2025-09-04 01:07:16.227586 | orchestrator | ok: [testbed-node-0] 2025-09-04 01:07:16.227597 | orchestrator | ok: [testbed-node-2] 2025-09-04 01:07:16.227608 | orchestrator | ok: [testbed-node-3] 2025-09-04 01:07:16.227619 | orchestrator | ok: [testbed-node-4] 2025-09-04 01:07:16.227629 | orchestrator | ok: [testbed-node-5] 2025-09-04 01:07:16.227640 | orchestrator | 2025-09-04 01:07:16.227651 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-09-04 01:07:16.227662 | orchestrator | Thursday 04 September 2025 01:03:03 +0000 (0:00:01.035) 0:00:05.158 **** 2025-09-04 01:07:16.227672 | orchestrator | ok: [testbed-node-0] => { 2025-09-04 01:07:16.227683 | orchestrator |  "changed": false, 2025-09-04 01:07:16.227694 | orchestrator |  "msg": "All assertions passed" 2025-09-04 01:07:16.227705 | orchestrator | } 2025-09-04 01:07:16.227716 | orchestrator | ok: [testbed-node-1] => { 2025-09-04 01:07:16.227743 | orchestrator |  "changed": false, 2025-09-04 01:07:16.227755 | orchestrator |  "msg": "All assertions passed" 2025-09-04 01:07:16.227766 | orchestrator | } 2025-09-04 01:07:16.227777 | orchestrator | ok: [testbed-node-2] => { 2025-09-04 01:07:16.227787 | orchestrator |  "changed": false, 2025-09-04 01:07:16.227851 | orchestrator |  "msg": "All assertions passed" 2025-09-04 01:07:16.227864 | orchestrator | } 2025-09-04 01:07:16.227875 | orchestrator | ok: [testbed-node-3] => { 2025-09-04 01:07:16.227885 | orchestrator |  "changed": false, 2025-09-04 01:07:16.227896 | orchestrator |  "msg": "All 
assertions passed" 2025-09-04 01:07:16.227907 | orchestrator | } 2025-09-04 01:07:16.227917 | orchestrator | ok: [testbed-node-4] => { 2025-09-04 01:07:16.227928 | orchestrator |  "changed": false, 2025-09-04 01:07:16.227938 | orchestrator |  "msg": "All assertions passed" 2025-09-04 01:07:16.227949 | orchestrator | } 2025-09-04 01:07:16.227959 | orchestrator | ok: [testbed-node-5] => { 2025-09-04 01:07:16.227970 | orchestrator |  "changed": false, 2025-09-04 01:07:16.227980 | orchestrator |  "msg": "All assertions passed" 2025-09-04 01:07:16.227991 | orchestrator | } 2025-09-04 01:07:16.228002 | orchestrator | 2025-09-04 01:07:16.228012 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-09-04 01:07:16.228023 | orchestrator | Thursday 04 September 2025 01:03:03 +0000 (0:00:00.720) 0:00:05.879 **** 2025-09-04 01:07:16.228034 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:07:16.228045 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:07:16.228055 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:07:16.228066 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:07:16.228077 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:07:16.228087 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:07:16.228098 | orchestrator | 2025-09-04 01:07:16.228108 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-09-04 01:07:16.228119 | orchestrator | Thursday 04 September 2025 01:03:04 +0000 (0:00:00.588) 0:00:06.467 **** 2025-09-04 01:07:16.228130 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-09-04 01:07:16.228140 | orchestrator | 2025-09-04 01:07:16.228151 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-09-04 01:07:16.228161 | orchestrator | Thursday 04 September 2025 01:03:08 +0000 (0:00:03.507) 0:00:09.974 **** 2025-09-04 01:07:16.228172 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-09-04 01:07:16.228184 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-09-04 01:07:16.228195 | orchestrator | 2025-09-04 01:07:16.228249 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-09-04 01:07:16.228261 | orchestrator | Thursday 04 September 2025 01:03:14 +0000 (0:00:06.269) 0:00:16.243 **** 2025-09-04 01:07:16.228272 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-04 01:07:16.228283 | orchestrator | 2025-09-04 01:07:16.228294 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-09-04 01:07:16.228305 | orchestrator | Thursday 04 September 2025 01:03:17 +0000 (0:00:03.268) 0:00:19.512 **** 2025-09-04 01:07:16.228325 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-04 01:07:16.228335 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-09-04 01:07:16.228346 | orchestrator | 2025-09-04 01:07:16.228357 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-09-04 01:07:16.228367 | orchestrator | Thursday 04 September 2025 01:03:21 +0000 (0:00:04.115) 0:00:23.628 **** 2025-09-04 01:07:16.228378 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-04 01:07:16.228389 | orchestrator | 2025-09-04 01:07:16.228400 | orchestrator | TASK 
[service-ks-register : neutron | Granting user roles] ********************* 2025-09-04 01:07:16.228410 | orchestrator | Thursday 04 September 2025 01:03:25 +0000 (0:00:03.343) 0:00:26.972 **** 2025-09-04 01:07:16.228421 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-09-04 01:07:16.228438 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-09-04 01:07:16.228450 | orchestrator | 2025-09-04 01:07:16.228460 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-04 01:07:16.228471 | orchestrator | Thursday 04 September 2025 01:03:33 +0000 (0:00:07.923) 0:00:34.895 **** 2025-09-04 01:07:16.228482 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:07:16.228493 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:07:16.228503 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:07:16.228514 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:07:16.228525 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:07:16.228536 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:07:16.228546 | orchestrator | 2025-09-04 01:07:16.228557 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-09-04 01:07:16.228568 | orchestrator | Thursday 04 September 2025 01:03:33 +0000 (0:00:00.750) 0:00:35.646 **** 2025-09-04 01:07:16.228578 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:07:16.228589 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:07:16.228600 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:07:16.228611 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:07:16.228622 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:07:16.228632 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:07:16.228643 | orchestrator | 2025-09-04 01:07:16.228654 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-09-04 01:07:16.228665 | orchestrator | Thursday 04 September 2025 01:03:35 +0000 (0:00:01.926) 0:00:37.572 **** 2025-09-04 01:07:16.228676 | orchestrator | ok: [testbed-node-0] 2025-09-04 01:07:16.228687 | orchestrator | ok: [testbed-node-1] 2025-09-04 01:07:16.228697 | orchestrator | ok: [testbed-node-2] 2025-09-04 01:07:16.228708 | orchestrator | ok: [testbed-node-3] 2025-09-04 01:07:16.228719 | orchestrator | ok: [testbed-node-5] 2025-09-04 01:07:16.228729 | orchestrator | ok: [testbed-node-4] 2025-09-04 01:07:16.228740 | orchestrator | 2025-09-04 01:07:16.228751 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-09-04 01:07:16.228761 | orchestrator | Thursday 04 September 2025 01:03:37 +0000 (0:00:01.974) 0:00:39.547 **** 2025-09-04 01:07:16.228772 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:07:16.228783 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:07:16.228794 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:07:16.228822 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:07:16.228833 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:07:16.228844 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:07:16.228855 | orchestrator | 2025-09-04 01:07:16.228866 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-09-04 01:07:16.228877 | orchestrator | Thursday 04 September 2025 01:03:39 +0000 (0:00:02.060) 0:00:41.608 **** 2025-09-04 01:07:16.228891 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-04 01:07:16.228949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-04 01:07:16.228968 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-04 01:07:16.228980 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-04 01:07:16.228992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-04 01:07:16.229003 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-04 01:07:16.229021 | orchestrator | 2025-09-04 01:07:16.229032 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-09-04 01:07:16.229043 | orchestrator | Thursday 04 September 2025 01:03:42 +0000 (0:00:02.818) 0:00:44.426 **** 2025-09-04 01:07:16.229054 | orchestrator | [WARNING]: Skipped 2025-09-04 01:07:16.229065 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-09-04 01:07:16.229076 | orchestrator | due to this access issue: 2025-09-04 01:07:16.229086 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-09-04 01:07:16.229097 | orchestrator | a directory 2025-09-04 01:07:16.229108 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-04 01:07:16.229119 | orchestrator | 2025-09-04 01:07:16.229129 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-04 01:07:16.229169 | orchestrator | Thursday 04 September 2025 01:03:43 +0000 (0:00:00.883) 0:00:45.310 **** 2025-09-04 01:07:16.229182 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 01:07:16.229194 | orchestrator | 2025-09-04 01:07:16.229205 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-09-04 01:07:16.229216 | orchestrator | Thursday 04 September 2025 01:03:44 +0000 (0:00:01.197) 0:00:46.507 **** 2025-09-04 01:07:16.229232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-04 01:07:16.229244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-04 01:07:16.229256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-04 01:07:16.229274 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-04 01:07:16.229315 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-04 01:07:16.229333 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-04 01:07:16.229344 | orchestrator | 2025-09-04 01:07:16.229355 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-09-04 01:07:16.229366 | orchestrator | Thursday 04 September 2025 01:03:47 +0000 (0:00:02.830) 0:00:49.337 **** 2025-09-04 01:07:16.229378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-04 01:07:16.229395 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:07:16.229407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-04 01:07:16.229418 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:07:16.229429 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-04 01:07:16.229440 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:07:16.229484 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-04 01:07:16.229497 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:07:16.229513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-04 01:07:16.229524 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:07:16.229535 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-04 01:07:16.229552 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:07:16.229563 | orchestrator | 2025-09-04 01:07:16.229574 | orchestrator | TASK [service-cert-copy : neutron | Copying over 
backend internal TLS key] ***** 2025-09-04 01:07:16.229585 | orchestrator | Thursday 04 September 2025 01:03:51 +0000 (0:00:03.744) 0:00:53.082 **** 2025-09-04 01:07:16.229596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-04 01:07:16.229607 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:07:16.229625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-04 01:07:16.229636 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:07:16.229651 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-04 01:07:16.229663 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:07:16.229674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-04 01:07:16.229696 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:07:16.229707 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-04 01:07:16.229718 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:07:16.229728 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-04 01:07:16.229740 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:07:16.229750 | orchestrator | 2025-09-04 01:07:16.229761 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-09-04 01:07:16.229772 | orchestrator | Thursday 04 September 2025 01:03:53 +0000 (0:00:02.532) 0:00:55.614 **** 2025-09-04 01:07:16.229782 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:07:16.229793 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:07:16.229832 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:07:16.229843 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:07:16.229854 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:07:16.229864 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:07:16.229875 | orchestrator | 2025-09-04 01:07:16.229886 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-09-04 01:07:16.229905 | orchestrator | Thursday 04 September 2025 01:03:55 +0000 (0:00:02.241) 0:00:57.856 **** 2025-09-04 01:07:16.229916 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:07:16.229927 | orchestrator | 2025-09-04 01:07:16.229937 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-09-04 01:07:16.229948 | orchestrator | Thursday 04 September 2025 01:03:56 +0000 (0:00:00.159) 0:00:58.015 **** 2025-09-04 01:07:16.229959 | orchestrator | skipping: 
[testbed-node-0] 2025-09-04 01:07:16.229970 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:07:16.229981 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:07:16.229991 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:07:16.230002 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:07:16.230012 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:07:16.230068 | orchestrator | 2025-09-04 01:07:16.230079 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-09-04 01:07:16.230098 | orchestrator | Thursday 04 September 2025 01:03:56 +0000 (0:00:00.749) 0:00:58.765 **** 2025-09-04 01:07:16.230114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-04 01:07:16.230126 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:07:16.230138 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-04 01:07:16.230149 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:07:16.230161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-04 01:07:16.230172 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:07:16.230190 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-04 01:07:16.230202 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:07:16.230213 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-04 01:07:16.230230 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:07:16.230246 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-04 01:07:16.230258 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:07:16.230269 | orchestrator | 2025-09-04 01:07:16.230279 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-09-04 01:07:16.230290 | orchestrator | Thursday 04 September 2025 01:03:59 +0000 (0:00:02.578) 0:01:01.343 **** 2025-09-04 01:07:16.230302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-04 01:07:16.230314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-04 01:07:16.230332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-04 01:07:16.230351 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-04 01:07:16.230362 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-04 01:07:16.230374 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-04 01:07:16.230385 | orchestrator | 2025-09-04 01:07:16.230396 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-09-04 01:07:16.230407 | orchestrator | Thursday 04 September 2025 01:04:03 +0000 (0:00:04.505) 0:01:05.849 **** 2025-09-04 01:07:16.230419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-04 01:07:16.230462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-04 01:07:16.230486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-04 01:07:16.230498 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-04 01:07:16.230509 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-04 01:07:16.230521 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-04 01:07:16.230532 | orchestrator | 2025-09-04 01:07:16.230543 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-09-04 01:07:16.230559 | orchestrator | Thursday 04 September 2025 01:04:11 +0000 (0:00:07.441) 0:01:13.291 **** 2025-09-04 01:07:16.230577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-04 01:07:16.230589 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:07:16.230604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-04 01:07:16.230616 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:07:16.230627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-04 01:07:16.230638 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:07:16.230649 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-04 01:07:16.230660 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:07:16.230672 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-04 01:07:16.230688 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:07:16.230705 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-04 01:07:16.230717 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:07:16.230727 | orchestrator | 2025-09-04 01:07:16.230738 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-09-04 01:07:16.230749 | orchestrator | Thursday 04 September 2025 01:04:14 +0000 (0:00:03.142) 0:01:16.433 **** 2025-09-04 01:07:16.230763 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:07:16.230774 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:07:16.230785 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:07:16.230812 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:07:16.230824 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:07:16.230834 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:07:16.230845 | orchestrator | 2025-09-04 01:07:16.230855 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-09-04 01:07:16.230866 | orchestrator | Thursday 04 September 2025 01:04:17 +0000 (0:00:02.907) 0:01:19.341 **** 2025-09-04 01:07:16.230877 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-04 01:07:16.230889 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:07:16.230900 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-04 01:07:16.230911 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:07:16.230928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-04 01:07:16.230940 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:07:16.230957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-04 01:07:16.230974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-04 01:07:16.230986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 
'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-04 01:07:16.230997 | orchestrator | 2025-09-04 01:07:16.231008 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-09-04 01:07:16.231019 | orchestrator | Thursday 04 September 2025 01:04:21 +0000 (0:00:03.797) 0:01:23.139 **** 2025-09-04 01:07:16.231029 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:07:16.231040 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:07:16.231051 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:07:16.231061 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:07:16.231082 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:07:16.231093 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:07:16.231104 | orchestrator | 2025-09-04 01:07:16.231115 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-09-04 01:07:16.231125 | orchestrator | Thursday 04 September 2025 01:04:23 +0000 (0:00:02.602) 0:01:25.742 **** 2025-09-04 01:07:16.231136 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:07:16.231147 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:07:16.231157 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:07:16.231168 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:07:16.231178 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:07:16.231189 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:07:16.231199 | orchestrator | 2025-09-04 01:07:16.231210 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-09-04 01:07:16.231221 | orchestrator | Thursday 04 September 2025 01:04:27 +0000 (0:00:03.684) 0:01:29.427 **** 2025-09-04 01:07:16.231232 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:07:16.231242 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:07:16.231253 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:07:16.231263 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:07:16.231274 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:07:16.231285 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:07:16.231295 | orchestrator | 2025-09-04 01:07:16.231306 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-09-04 01:07:16.231317 | orchestrator | Thursday 04 September 2025 01:04:30 +0000 (0:00:02.869) 0:01:32.297 **** 2025-09-04 01:07:16.231328 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:07:16.231338 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:07:16.231349 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:07:16.231359 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:07:16.231370 | orchestrator | skipping: [testbed-node-3] 
2025-09-04 01:07:16.231380 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:07:16.231391 | orchestrator | 2025-09-04 01:07:16.231401 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-09-04 01:07:16.231412 | orchestrator | Thursday 04 September 2025 01:04:33 +0000 (0:00:02.668) 0:01:34.965 **** 2025-09-04 01:07:16.231423 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:07:16.231434 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:07:16.231444 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:07:16.231455 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:07:16.231471 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:07:16.231482 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:07:16.231493 | orchestrator | 2025-09-04 01:07:16.231504 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-09-04 01:07:16.231515 | orchestrator | Thursday 04 September 2025 01:04:35 +0000 (0:00:02.899) 0:01:37.865 **** 2025-09-04 01:07:16.231526 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:07:16.231536 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:07:16.231547 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:07:16.231557 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:07:16.231568 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:07:16.231578 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:07:16.231589 | orchestrator | 2025-09-04 01:07:16.231600 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-09-04 01:07:16.231611 | orchestrator | Thursday 04 September 2025 01:04:39 +0000 (0:00:03.167) 0:01:41.033 **** 2025-09-04 01:07:16.231621 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-04 01:07:16.231632 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:07:16.231643 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-04 01:07:16.231653 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:07:16.231668 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-04 01:07:16.231686 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:07:16.231697 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-04 01:07:16.231708 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:07:16.231718 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-04 01:07:16.231729 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:07:16.231740 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-04 01:07:16.231750 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:07:16.231761 | orchestrator | 2025-09-04 01:07:16.231771 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-09-04 01:07:16.231782 | orchestrator | Thursday 04 September 2025 01:04:41 +0000 (0:00:02.337) 0:01:43.370 **** 2025-09-04 01:07:16.231794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-04 01:07:16.231821 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:07:16.231832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-04 01:07:16.231843 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:07:16.231861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-04 01:07:16.231873 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:07:16.231888 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-04 01:07:16.231905 | orchestrator | skipping: 
[testbed-node-3] 2025-09-04 01:07:16.231917 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-04 01:07:16.231928 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:07:16.231938 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-04 01:07:16.231949 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:07:16.231960 | orchestrator | 2025-09-04 01:07:16.231971 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-09-04 01:07:16.231981 | orchestrator | Thursday 04 September 2025 01:04:43 +0000 (0:00:02.193) 0:01:45.564 **** 2025-09-04 01:07:16.231993 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-04 01:07:16.232004 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:07:16.232020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-04 01:07:16.232037 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:07:16.232052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-04 01:07:16.232063 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:07:16.232074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-04 01:07:16.232085 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:07:16.232096 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-04 01:07:16.232108 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:07:16.232118 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-04 01:07:16.232129 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:07:16.232140 | orchestrator | 2025-09-04 01:07:16.232151 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-09-04 01:07:16.232168 | orchestrator | Thursday 04 September 2025 01:04:47 +0000 (0:00:03.573) 0:01:49.137 **** 2025-09-04 01:07:16.232179 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:07:16.232194 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:07:16.232205 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:07:16.232216 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:07:16.232226 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:07:16.232237 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:07:16.232248 | orchestrator | 2025-09-04 01:07:16.232258 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-09-04 01:07:16.232269 | orchestrator | Thursday 04 September 2025 01:04:48 +0000 (0:00:01.710) 0:01:50.847 **** 2025-09-04 01:07:16.232280 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:07:16.232291 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:07:16.232301 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:07:16.232312 | orchestrator | changed: [testbed-node-3] 2025-09-04 01:07:16.232322 | orchestrator | changed: [testbed-node-4] 2025-09-04 01:07:16.232332 | orchestrator | changed: [testbed-node-5] 2025-09-04 01:07:16.232343 | orchestrator | 2025-09-04 01:07:16.232354 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-09-04 01:07:16.232364 | orchestrator | Thursday 04 September 2025 01:04:52 +0000 (0:00:03.279) 0:01:54.126 **** 2025-09-04 01:07:16.232375 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:07:16.232386 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:07:16.232396 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:07:16.232407 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:07:16.232417 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:07:16.232432 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:07:16.232443 | orchestrator | 2025-09-04 01:07:16.232454 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-09-04 01:07:16.232465 | orchestrator | Thursday 04 September 2025 01:04:55 +0000 (0:00:03.037) 0:01:57.164 **** 2025-09-04 01:07:16.232475 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:07:16.232486 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:07:16.232496 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:07:16.232507 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:07:16.232517 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:07:16.232528 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:07:16.232538 | orchestrator | 2025-09-04 01:07:16.232549 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-09-04 01:07:16.232560 | orchestrator | Thursday 04 September 2025 01:04:58 
+0000 (0:00:02.959) 0:02:00.124 **** 2025-09-04 01:07:16.232571 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:07:16.232581 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:07:16.232592 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:07:16.232602 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:07:16.232613 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:07:16.232623 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:07:16.232634 | orchestrator | 2025-09-04 01:07:16.232645 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-09-04 01:07:16.232655 | orchestrator | Thursday 04 September 2025 01:05:01 +0000 (0:00:03.715) 0:02:03.839 **** 2025-09-04 01:07:16.232666 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:07:16.232677 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:07:16.232687 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:07:16.232698 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:07:16.232709 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:07:16.232719 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:07:16.232730 | orchestrator | 2025-09-04 01:07:16.232740 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-09-04 01:07:16.232751 | orchestrator | Thursday 04 September 2025 01:05:04 +0000 (0:00:02.503) 0:02:06.343 **** 2025-09-04 01:07:16.232767 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:07:16.232778 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:07:16.232789 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:07:16.232815 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:07:16.232826 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:07:16.232837 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:07:16.232847 | orchestrator | 2025-09-04 01:07:16.232858 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-09-04 01:07:16.232869 | orchestrator | Thursday 04 September 2025 01:05:06 +0000 (0:00:02.455) 0:02:08.798 **** 2025-09-04 01:07:16.232880 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:07:16.232891 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:07:16.232901 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:07:16.232912 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:07:16.232922 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:07:16.232933 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:07:16.232944 | orchestrator | 2025-09-04 01:07:16.232954 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-09-04 01:07:16.232965 | orchestrator | Thursday 04 September 2025 01:05:09 +0000 (0:00:02.734) 0:02:11.533 **** 2025-09-04 01:07:16.232976 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:07:16.232986 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:07:16.232997 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:07:16.233008 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:07:16.233018 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:07:16.233029 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:07:16.233039 | orchestrator | 2025-09-04 01:07:16.233050 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-09-04 01:07:16.233061 | orchestrator | Thursday 04 September 2025 
01:05:12 +0000 (0:00:02.651) 0:02:14.184 **** 2025-09-04 01:07:16.233072 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-04 01:07:16.233083 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:07:16.233093 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-04 01:07:16.233104 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:07:16.233115 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-04 01:07:16.233126 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:07:16.233137 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-04 01:07:16.233147 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:07:16.233163 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-04 01:07:16.233174 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:07:16.233185 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-04 01:07:16.233196 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:07:16.233207 | orchestrator | 2025-09-04 01:07:16.233218 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-09-04 01:07:16.233228 | orchestrator | Thursday 04 September 2025 01:05:15 +0000 (0:00:03.395) 0:02:17.579 **** 2025-09-04 01:07:16.233244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-04 01:07:16.233262 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:07:16.233273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-04 01:07:16.233285 | 
orchestrator | skipping: [testbed-node-0] 2025-09-04 01:07:16.233295 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-04 01:07:16.233307 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:07:16.233317 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-04 01:07:16.233329 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:07:16.233346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-04 01:07:16.233357 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:07:16.233373 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-04 01:07:16.233389 | 
orchestrator | skipping: [testbed-node-5] 2025-09-04 01:07:16.233400 | orchestrator | 2025-09-04 01:07:16.233410 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-09-04 01:07:16.233421 | orchestrator | Thursday 04 September 2025 01:05:19 +0000 (0:00:03.377) 0:02:20.957 **** 2025-09-04 01:07:16.233432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-04 01:07:16.233444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-04 01:07:16.233461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-04 01:07:16.233477 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-04 01:07:16.233497 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-04 01:07:16.233509 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-04 01:07:16.233520 | orchestrator | 2025-09-04 01:07:16.233531 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-04 01:07:16.233542 | orchestrator | Thursday 04 September 2025 01:05:23 +0000 (0:00:04.148) 0:02:25.105 **** 2025-09-04 01:07:16.233553 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:07:16.233564 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:07:16.233574 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:07:16.233585 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:07:16.233595 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:07:16.233606 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:07:16.233617 | orchestrator | 2025-09-04 01:07:16.233627 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-09-04 01:07:16.233638 | orchestrator | Thursday 04 September 2025 01:05:23 +0000 (0:00:00.414) 0:02:25.520 **** 2025-09-04 01:07:16.233648 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:07:16.233659 | orchestrator | 2025-09-04 01:07:16.233670 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-09-04 01:07:16.233680 | orchestrator | Thursday 04 September 2025 01:05:25 +0000 (0:00:02.139) 0:02:27.659 **** 2025-09-04 01:07:16.233691 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:07:16.233702 | orchestrator | 2025-09-04 01:07:16.233712 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-09-04 01:07:16.233723 | orchestrator | Thursday 04 
September 2025 01:05:28 +0000 (0:00:02.249) 0:02:29.908 **** 2025-09-04 01:07:16.233733 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:07:16.233744 | orchestrator | 2025-09-04 01:07:16.233755 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-04 01:07:16.233765 | orchestrator | Thursday 04 September 2025 01:06:10 +0000 (0:00:42.148) 0:03:12.057 **** 2025-09-04 01:07:16.233776 | orchestrator | 2025-09-04 01:07:16.233786 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-04 01:07:16.233838 | orchestrator | Thursday 04 September 2025 01:06:10 +0000 (0:00:00.058) 0:03:12.115 **** 2025-09-04 01:07:16.233850 | orchestrator | 2025-09-04 01:07:16.233869 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-04 01:07:16.233880 | orchestrator | Thursday 04 September 2025 01:06:10 +0000 (0:00:00.167) 0:03:12.283 **** 2025-09-04 01:07:16.233891 | orchestrator | 2025-09-04 01:07:16.233901 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-04 01:07:16.233912 | orchestrator | Thursday 04 September 2025 01:06:10 +0000 (0:00:00.056) 0:03:12.340 **** 2025-09-04 01:07:16.233923 | orchestrator | 2025-09-04 01:07:16.233939 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-04 01:07:16.233950 | orchestrator | Thursday 04 September 2025 01:06:10 +0000 (0:00:00.099) 0:03:12.439 **** 2025-09-04 01:07:16.233961 | orchestrator | 2025-09-04 01:07:16.233971 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-04 01:07:16.233982 | orchestrator | Thursday 04 September 2025 01:06:10 +0000 (0:00:00.120) 0:03:12.560 **** 2025-09-04 01:07:16.233993 | orchestrator | 2025-09-04 01:07:16.234004 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-09-04 01:07:16.234052 | orchestrator | Thursday 04 September 2025 01:06:10 +0000 (0:00:00.133) 0:03:12.693 **** 2025-09-04 01:07:16.234066 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:07:16.234076 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:07:16.234087 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:07:16.234098 | orchestrator | 2025-09-04 01:07:16.234109 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-09-04 01:07:16.234120 | orchestrator | Thursday 04 September 2025 01:06:42 +0000 (0:00:32.011) 0:03:44.705 **** 2025-09-04 01:07:16.234130 | orchestrator | changed: [testbed-node-4] 2025-09-04 01:07:16.234141 | orchestrator | changed: [testbed-node-3] 2025-09-04 01:07:16.234152 | orchestrator | changed: [testbed-node-5] 2025-09-04 01:07:16.234162 | orchestrator | 2025-09-04 01:07:16.234173 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 01:07:16.234189 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-04 01:07:16.234201 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-09-04 01:07:16.234212 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-09-04 01:07:16.234223 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 
2025-09-04 01:07:16.234234 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-04 01:07:16.234244 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-04 01:07:16.234255 | orchestrator | 2025-09-04 01:07:16.234266 | orchestrator | 2025-09-04 01:07:16.234277 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 01:07:16.234287 | orchestrator | Thursday 04 September 2025 01:07:12 +0000 (0:00:30.097) 0:04:14.803 **** 2025-09-04 01:07:16.234298 | orchestrator | =============================================================================== 2025-09-04 01:07:16.234309 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 42.15s 2025-09-04 01:07:16.234319 | orchestrator | neutron : Restart neutron-server container ----------------------------- 32.01s 2025-09-04 01:07:16.234329 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 30.10s 2025-09-04 01:07:16.234338 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.92s 2025-09-04 01:07:16.234348 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.44s 2025-09-04 01:07:16.234357 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.27s 2025-09-04 01:07:16.234373 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.51s 2025-09-04 01:07:16.234383 | orchestrator | neutron : Check neutron containers -------------------------------------- 4.15s 2025-09-04 01:07:16.234392 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.12s 2025-09-04 01:07:16.234402 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.80s 2025-09-04 01:07:16.234411 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 3.74s 2025-09-04 01:07:16.234421 | orchestrator | neutron : Copying over bgp_dragent.ini ---------------------------------- 3.72s 2025-09-04 01:07:16.234430 | orchestrator | neutron : Copying over openvswitch_agent.ini ---------------------------- 3.68s 2025-09-04 01:07:16.234439 | orchestrator | neutron : Copying over fwaas_driver.ini --------------------------------- 3.57s 2025-09-04 01:07:16.234449 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.51s 2025-09-04 01:07:16.234458 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 3.40s 2025-09-04 01:07:16.234468 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 3.38s 2025-09-04 01:07:16.234477 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.34s 2025-09-04 01:07:16.234487 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.28s 2025-09-04 01:07:16.234496 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.27s 2025-09-04 01:07:16.234506 | orchestrator | 2025-09-04 01:07:16 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:07:19.273252 | orchestrator | 2025-09-04 01:07:19 | INFO  | Task f492c67d-0836-4173-b7fd-7a6578686478 is in state STARTED 2025-09-04 01:07:19.274880 | orchestrator | 2025-09-04 01:07:19 | INFO  | Task 
b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED 2025-09-04 01:07:19.275848 | orchestrator | 2025-09-04 01:07:19 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:07:19.277300 | orchestrator | 2025-09-04 01:07:19 | INFO  | Task 84cd4556-0088-4a34-8d8b-feb9399c7994 is in state STARTED 2025-09-04 01:07:19.277555 | orchestrator | 2025-09-04 01:07:19 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:07:22.325088 | orchestrator | 2025-09-04 01:07:22 | INFO  | Task f492c67d-0836-4173-b7fd-7a6578686478 is in state STARTED 2025-09-04 01:07:22.327152 | orchestrator | 2025-09-04 01:07:22 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED 2025-09-04 01:07:22.328976 | orchestrator | 2025-09-04 01:07:22 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:07:22.330254 | orchestrator | 2025-09-04 01:07:22 | INFO  | Task 84cd4556-0088-4a34-8d8b-feb9399c7994 is in state STARTED 2025-09-04 01:07:22.330462 | orchestrator | 2025-09-04 01:07:22 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:07:25.365282 | orchestrator | 2025-09-04 01:07:25 | INFO  | Task f492c67d-0836-4173-b7fd-7a6578686478 is in state STARTED 2025-09-04 01:07:25.365782 | orchestrator | 2025-09-04 01:07:25 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED 2025-09-04 01:07:25.367029 | orchestrator | 2025-09-04 01:07:25 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:07:25.368138 | orchestrator | 2025-09-04 01:07:25 | INFO  | Task 84cd4556-0088-4a34-8d8b-feb9399c7994 is in state STARTED 2025-09-04 01:07:25.368165 | orchestrator | 2025-09-04 01:07:25 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:07:28.404310 | orchestrator | 2025-09-04 01:07:28 | INFO  | Task f492c67d-0836-4173-b7fd-7a6578686478 is in state STARTED 2025-09-04 01:07:28.404625 | orchestrator | 2025-09-04 01:07:28 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED 2025-09-04 01:07:28.405480 | orchestrator | 2025-09-04 01:07:28 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:07:28.407312 | orchestrator | 2025-09-04 01:07:28 | INFO  | Task 84cd4556-0088-4a34-8d8b-feb9399c7994 is in state STARTED 2025-09-04 01:07:28.407337 | orchestrator | 2025-09-04 01:07:28 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:07:31.445041 | orchestrator | 2025-09-04 01:07:31 | INFO  | Task f492c67d-0836-4173-b7fd-7a6578686478 is in state STARTED 2025-09-04 01:07:31.445154 | orchestrator | 2025-09-04 01:07:31 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED 2025-09-04 01:07:31.446014 | orchestrator | 2025-09-04 01:07:31 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:07:31.446885 | orchestrator | 2025-09-04 01:07:31 | INFO  | Task 84cd4556-0088-4a34-8d8b-feb9399c7994 is in state STARTED 2025-09-04 01:07:31.446908 | orchestrator | 2025-09-04 01:07:31 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:07:34.481182 | orchestrator | 2025-09-04 01:07:34 | INFO  | Task f492c67d-0836-4173-b7fd-7a6578686478 is in state STARTED 2025-09-04 01:07:34.482221 | orchestrator | 2025-09-04 01:07:34 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED 2025-09-04 01:07:34.482970 | orchestrator | 2025-09-04 01:07:34 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:07:34.484177 | orchestrator | 2025-09-04 01:07:34 | INFO  | Task 
84cd4556-0088-4a34-8d8b-feb9399c7994 is in state STARTED 2025-09-04 01:07:34.484210 | orchestrator | 2025-09-04 01:07:34 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:07:37.536117 | orchestrator | 2025-09-04 01:07:37 | INFO  | Task f492c67d-0836-4173-b7fd-7a6578686478 is in state STARTED 2025-09-04 01:07:37.537994 | orchestrator | 2025-09-04 01:07:37 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED 2025-09-04 01:07:37.539673 | orchestrator | 2025-09-04 01:07:37 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:07:37.541342 | orchestrator | 2025-09-04 01:07:37 | INFO  | Task 84cd4556-0088-4a34-8d8b-feb9399c7994 is in state STARTED 2025-09-04 01:07:37.541364 | orchestrator | 2025-09-04 01:07:37 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:07:40.585312 | orchestrator | 2025-09-04 01:07:40 | INFO  | Task f492c67d-0836-4173-b7fd-7a6578686478 is in state STARTED 2025-09-04 01:07:40.586394 | orchestrator | 2025-09-04 01:07:40 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED 2025-09-04 01:07:40.588373 | orchestrator | 2025-09-04 01:07:40 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:07:40.589916 | orchestrator | 2025-09-04 01:07:40 | INFO  | Task 84cd4556-0088-4a34-8d8b-feb9399c7994 is in state STARTED 2025-09-04 01:07:40.589940 | orchestrator | 2025-09-04 01:07:40 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:07:43.634583 | orchestrator | 2025-09-04 01:07:43 | INFO  | Task f492c67d-0836-4173-b7fd-7a6578686478 is in state STARTED 2025-09-04 01:07:43.636856 | orchestrator | 2025-09-04 01:07:43 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED 2025-09-04 01:07:43.638295 | orchestrator | 2025-09-04 01:07:43 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:07:43.639874 | orchestrator | 2025-09-04 01:07:43 | INFO  | Task 84cd4556-0088-4a34-8d8b-feb9399c7994 is in state STARTED 2025-09-04 01:07:43.640062 | orchestrator | 2025-09-04 01:07:43 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:07:46.684037 | orchestrator | 2025-09-04 01:07:46 | INFO  | Task f492c67d-0836-4173-b7fd-7a6578686478 is in state STARTED 2025-09-04 01:07:46.685470 | orchestrator | 2025-09-04 01:07:46 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED 2025-09-04 01:07:46.688069 | orchestrator | 2025-09-04 01:07:46 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:07:46.690341 | orchestrator | 2025-09-04 01:07:46 | INFO  | Task 84cd4556-0088-4a34-8d8b-feb9399c7994 is in state STARTED 2025-09-04 01:07:46.690368 | orchestrator | 2025-09-04 01:07:46 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:07:49.726944 | orchestrator | 2025-09-04 01:07:49 | INFO  | Task f492c67d-0836-4173-b7fd-7a6578686478 is in state STARTED 2025-09-04 01:07:49.727855 | orchestrator | 2025-09-04 01:07:49 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED 2025-09-04 01:07:49.728923 | orchestrator | 2025-09-04 01:07:49 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:07:49.730210 | orchestrator | 2025-09-04 01:07:49 | INFO  | Task 84cd4556-0088-4a34-8d8b-feb9399c7994 is in state STARTED 2025-09-04 01:07:49.730285 | orchestrator | 2025-09-04 01:07:49 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:07:52.774909 | orchestrator | 2025-09-04 01:07:52 | INFO  | Task 
f492c67d-0836-4173-b7fd-7a6578686478 is in state STARTED 2025-09-04 01:07:52.779898 | orchestrator | 2025-09-04 01:07:52 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED 2025-09-04 01:07:52.782128 | orchestrator | 2025-09-04 01:07:52 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:07:52.783966 | orchestrator | 2025-09-04 01:07:52 | INFO  | Task 84cd4556-0088-4a34-8d8b-feb9399c7994 is in state STARTED 2025-09-04 01:07:52.784202 | orchestrator | 2025-09-04 01:07:52 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:07:55.838253 | orchestrator | 2025-09-04 01:07:55 | INFO  | Task f492c67d-0836-4173-b7fd-7a6578686478 is in state STARTED 2025-09-04 01:07:55.839249 | orchestrator | 2025-09-04 01:07:55 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED 2025-09-04 01:07:55.839925 | orchestrator | 2025-09-04 01:07:55 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:07:55.840680 | orchestrator | 2025-09-04 01:07:55 | INFO  | Task 84cd4556-0088-4a34-8d8b-feb9399c7994 is in state STARTED 2025-09-04 01:07:55.840701 | orchestrator | 2025-09-04 01:07:55 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:07:58.892774 | orchestrator | 2025-09-04 01:07:58 | INFO  | Task f492c67d-0836-4173-b7fd-7a6578686478 is in state STARTED 2025-09-04 01:07:58.893253 | orchestrator | 2025-09-04 01:07:58 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED 2025-09-04 01:07:58.894464 | orchestrator | 2025-09-04 01:07:58 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:07:58.895286 | orchestrator | 2025-09-04 01:07:58 | INFO  | Task 84cd4556-0088-4a34-8d8b-feb9399c7994 is in state STARTED 2025-09-04 01:07:58.895310 | orchestrator | 2025-09-04 01:07:58 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:08:01.944111 | orchestrator | 2025-09-04 01:08:01 | INFO  | Task f492c67d-0836-4173-b7fd-7a6578686478 is in state STARTED 2025-09-04 01:08:01.946540 | orchestrator | 2025-09-04 01:08:01 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED 2025-09-04 01:08:01.949571 | orchestrator | 2025-09-04 01:08:01 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:08:01.951924 | orchestrator | 2025-09-04 01:08:01 | INFO  | Task 84cd4556-0088-4a34-8d8b-feb9399c7994 is in state STARTED 2025-09-04 01:08:01.952207 | orchestrator | 2025-09-04 01:08:01 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:08:04.993186 | orchestrator | 2025-09-04 01:08:04 | INFO  | Task f492c67d-0836-4173-b7fd-7a6578686478 is in state STARTED 2025-09-04 01:08:04.994913 | orchestrator | 2025-09-04 01:08:04 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED 2025-09-04 01:08:04.996190 | orchestrator | 2025-09-04 01:08:04 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:08:04.997842 | orchestrator | 2025-09-04 01:08:04 | INFO  | Task 84cd4556-0088-4a34-8d8b-feb9399c7994 is in state STARTED 2025-09-04 01:08:04.997876 | orchestrator | 2025-09-04 01:08:04 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:08:08.036294 | orchestrator | 2025-09-04 01:08:08 | INFO  | Task f492c67d-0836-4173-b7fd-7a6578686478 is in state STARTED 2025-09-04 01:08:08.040828 | orchestrator | 2025-09-04 01:08:08 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED 2025-09-04 01:08:08.043710 | orchestrator | 2025-09-04 01:08:08 | INFO  | Task 
9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:08:08.046068 | orchestrator | 2025-09-04 01:08:08 | INFO  | Task 84cd4556-0088-4a34-8d8b-feb9399c7994 is in state STARTED 2025-09-04 01:08:08.046555 | orchestrator | 2025-09-04 01:08:08 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:08:11.093635 | orchestrator | 2025-09-04 01:08:11 | INFO  | Task f492c67d-0836-4173-b7fd-7a6578686478 is in state STARTED 2025-09-04 01:08:11.095088 | orchestrator | 2025-09-04 01:08:11 | INFO  | Task ba22c6f3-3ffe-448e-bf1f-d2593a40ab9d is in state STARTED 2025-09-04 01:08:11.098706 | orchestrator | 2025-09-04 01:08:11 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED 2025-09-04 01:08:11.100132 | orchestrator | 2025-09-04 01:08:11 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:08:11.102199 | orchestrator | 2025-09-04 01:08:11 | INFO  | Task 84cd4556-0088-4a34-8d8b-feb9399c7994 is in state SUCCESS 2025-09-04 01:08:11.103949 | orchestrator | 2025-09-04 01:08:11.103974 | orchestrator | 2025-09-04 01:08:11.103981 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-04 01:08:11.103988 | orchestrator | 2025-09-04 01:08:11.103993 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-04 01:08:11.103999 | orchestrator | Thursday 04 September 2025 01:06:56 +0000 (0:00:00.284) 0:00:00.284 **** 2025-09-04 01:08:11.104005 | orchestrator | ok: [testbed-node-0] 2025-09-04 01:08:11.104012 | orchestrator | ok: [testbed-node-1] 2025-09-04 01:08:11.104018 | orchestrator | ok: [testbed-node-2] 2025-09-04 01:08:11.104024 | orchestrator | 2025-09-04 01:08:11.104029 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-04 01:08:11.104035 | orchestrator | Thursday 04 September 2025 01:06:56 +0000 (0:00:00.300) 0:00:00.584 **** 2025-09-04 01:08:11.104041 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-09-04 01:08:11.104047 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-09-04 01:08:11.104052 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-09-04 01:08:11.104058 | orchestrator | 2025-09-04 01:08:11.104063 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-09-04 01:08:11.104068 | orchestrator | 2025-09-04 01:08:11.104074 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-04 01:08:11.104079 | orchestrator | Thursday 04 September 2025 01:06:57 +0000 (0:00:00.428) 0:00:01.013 **** 2025-09-04 01:08:11.104085 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 01:08:11.104111 | orchestrator | 2025-09-04 01:08:11.104117 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-09-04 01:08:11.104122 | orchestrator | Thursday 04 September 2025 01:06:57 +0000 (0:00:00.525) 0:00:01.539 **** 2025-09-04 01:08:11.104127 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-09-04 01:08:11.104133 | orchestrator | 2025-09-04 01:08:11.104138 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-09-04 01:08:11.104143 | orchestrator | Thursday 04 September 2025 01:07:01 +0000 (0:00:03.775) 0:00:05.314 **** 2025-09-04 
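The service-ks-register tasks in this placement play register the service, its internal and public endpoints, the service project and user, and the admin role grant in Keystone. Kolla-Ansible performs this with its own modules; purely as an illustration, the same registration could be sketched with openstacksdk roughly as below. The cloud profile name and password are placeholders; the service name and endpoint URLs are the ones printed in the log.

```python
# Rough openstacksdk sketch of what the service-ks-register tasks above achieve for
# placement. This is NOT how kolla-ansible does it internally; it only illustrates
# the Keystone objects being created. "testbed" and the password are placeholders.
import openstack

conn = openstack.connect(cloud="testbed")  # assumed clouds.yaml entry

service = conn.identity.create_service(name="placement", type="placement")
for interface, url in {
    "internal": "https://api-int.testbed.osism.xyz:8780",
    "public": "https://api.testbed.osism.xyz:8780",
}.items():
    conn.identity.create_endpoint(service_id=service.id, interface=interface, url=url)

project = conn.identity.find_project("service")
user = conn.identity.create_user(
    name="placement", password="CHANGE_ME", default_project_id=project.id
)
role = conn.identity.find_role("admin")
conn.identity.assign_project_role_to_user(project, user, role)
```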
01:08:11.104148 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-09-04 01:08:11.104154 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-09-04 01:08:11.104160 | orchestrator | 2025-09-04 01:08:11.104165 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-09-04 01:08:11.104171 | orchestrator | Thursday 04 September 2025 01:07:07 +0000 (0:00:06.072) 0:00:11.386 **** 2025-09-04 01:08:11.104176 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-04 01:08:11.104182 | orchestrator | 2025-09-04 01:08:11.104187 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-09-04 01:08:11.104192 | orchestrator | Thursday 04 September 2025 01:07:10 +0000 (0:00:03.425) 0:00:14.812 **** 2025-09-04 01:08:11.104198 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-04 01:08:11.104203 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-09-04 01:08:11.104208 | orchestrator | 2025-09-04 01:08:11.104213 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-09-04 01:08:11.104219 | orchestrator | Thursday 04 September 2025 01:07:14 +0000 (0:00:03.975) 0:00:18.787 **** 2025-09-04 01:08:11.104224 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-04 01:08:11.104230 | orchestrator | 2025-09-04 01:08:11.104235 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-09-04 01:08:11.104241 | orchestrator | Thursday 04 September 2025 01:07:18 +0000 (0:00:03.462) 0:00:22.250 **** 2025-09-04 01:08:11.104246 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-09-04 01:08:11.104251 | orchestrator | 2025-09-04 01:08:11.104257 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-04 01:08:11.104262 | orchestrator | Thursday 04 September 2025 01:07:22 +0000 (0:00:04.315) 0:00:26.565 **** 2025-09-04 01:08:11.104267 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:08:11.104273 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:08:11.104278 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:08:11.104283 | orchestrator | 2025-09-04 01:08:11.104289 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-09-04 01:08:11.104294 | orchestrator | Thursday 04 September 2025 01:07:22 +0000 (0:00:00.288) 0:00:26.854 **** 2025-09-04 01:08:11.104313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-04 01:08:11.104331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-04 01:08:11.104342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-04 01:08:11.104348 | orchestrator | 2025-09-04 01:08:11.104353 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-09-04 01:08:11.104359 | orchestrator | Thursday 04 September 2025 01:07:23 +0000 (0:00:00.799) 0:00:27.654 **** 2025-09-04 01:08:11.104364 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:08:11.104369 | orchestrator | 2025-09-04 01:08:11.104375 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-09-04 01:08:11.104380 | orchestrator | Thursday 04 September 2025 01:07:23 +0000 (0:00:00.119) 0:00:27.773 **** 2025-09-04 01:08:11.104385 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:08:11.104391 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:08:11.104396 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:08:11.104402 | orchestrator | 2025-09-04 01:08:11.104407 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-04 01:08:11.104412 | orchestrator | Thursday 04 September 2025 01:07:24 +0000 (0:00:00.538) 0:00:28.312 **** 2025-09-04 01:08:11.104418 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 01:08:11.104424 | orchestrator | 2025-09-04 01:08:11.104429 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-09-04 01:08:11.104435 | orchestrator | Thursday 04 September 2025 01:07:24 +0000 (0:00:00.512) 0:00:28.824 **** 2025-09-04 01:08:11.104444 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-04 01:08:11.104460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-04 01:08:11.104466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-04 01:08:11.104471 | orchestrator | 2025-09-04 01:08:11.104477 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-09-04 01:08:11.104482 | orchestrator | Thursday 04 September 2025 01:07:26 +0000 (0:00:01.693) 0:00:30.517 **** 2025-09-04 01:08:11.104488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-04 01:08:11.104493 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:08:11.104502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-04 01:08:11.104508 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:08:11.104520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-04 01:08:11.104526 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:08:11.104532 | orchestrator | 2025-09-04 01:08:11.104537 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-09-04 01:08:11.104542 | orchestrator | Thursday 04 September 2025 01:07:27 +0000 (0:00:00.843) 0:00:31.361 **** 2025-09-04 01:08:11.104548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-04 01:08:11.104553 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:08:11.104559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-04 01:08:11.104566 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:08:11.104573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-04 01:08:11.104583 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:08:11.104589 | orchestrator | 2025-09-04 01:08:11.104599 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-09-04 01:08:11.104605 | orchestrator | Thursday 04 September 2025 01:07:28 +0000 (0:00:00.679) 0:00:32.040 **** 2025-09-04 01:08:11.104615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-04 01:08:11.104623 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-04 01:08:11.104630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-04 01:08:11.104637 | orchestrator | 2025-09-04 01:08:11.104643 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-09-04 01:08:11.104650 | orchestrator | Thursday 04 September 2025 01:07:29 +0000 (0:00:01.533) 0:00:33.574 **** 2025-09-04 01:08:11.104657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-04 01:08:11.104671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-04 01:08:11.104683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-04 01:08:11.104690 | orchestrator | 2025-09-04 01:08:11.104696 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-09-04 01:08:11.104702 | orchestrator | Thursday 04 September 2025 01:07:33 +0000 (0:00:03.989) 0:00:37.563 **** 2025-09-04 01:08:11.104709 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-04 01:08:11.104715 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-04 01:08:11.104722 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-04 01:08:11.104729 | orchestrator | 2025-09-04 01:08:11.104735 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-09-04 01:08:11.104741 | orchestrator | Thursday 04 September 2025 01:07:35 +0000 (0:00:01.847) 0:00:39.411 **** 2025-09-04 01:08:11.104748 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:08:11.104754 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:08:11.104760 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:08:11.104767 | orchestrator | 2025-09-04 01:08:11.104791 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-09-04 01:08:11.104801 | orchestrator | Thursday 04 September 2025 01:07:37 +0000 (0:00:01.581) 0:00:40.992 **** 2025-09-04 01:08:11.104810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-04 01:08:11.104827 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:08:11.104847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-04 01:08:11.104855 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:08:11.104866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-04 01:08:11.104873 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:08:11.104880 | orchestrator | 2025-09-04 01:08:11.104885 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-09-04 01:08:11.104891 | orchestrator | Thursday 04 September 2025 01:07:37 +0000 (0:00:00.687) 0:00:41.679 **** 2025-09-04 01:08:11.104896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-04 01:08:11.104902 | orchestrator | 
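The placement-api item repeated in these tasks also carries the container healthcheck (kolla's healthcheck_curl against the backend on port 8780, 30 s interval, 3 retries) and the HAProxy frontend definitions, with TLS terminated at the proxy (tls_backend: 'no'). A rough Python equivalent of that healthcheck, for illustration only (the real check is kolla's healthcheck_curl shell helper):

```python
# Approximation of the placement-api container healthcheck shown in the items above:
# probe the backend on 8780, up to 3 retries, 30 s interval/timeout.
import sys
import time
import urllib.error
import urllib.request

URL = "http://192.168.16.10:8780"   # backend address from the logged item
RETRIES, INTERVAL, TIMEOUT = 3, 30, 30

def healthy(url, timeout):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 500       # placement answers its version document on /
    except urllib.error.HTTPError as exc:
        return exc.code < 500              # a 4xx still means the API is up
    except OSError:
        return False                       # connection refused, timeout, ...

for _ in range(RETRIES):
    if healthy(URL, TIMEOUT):
        sys.exit(0)
    time.sleep(INTERVAL)
sys.exit(1)
```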
changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-04 01:08:11.104915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-04 01:08:11.104920 | orchestrator | 2025-09-04 01:08:11.104926 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-09-04 01:08:11.104931 | orchestrator | Thursday 04 September 2025 01:07:39 +0000 (0:00:01.316) 0:00:42.996 **** 2025-09-04 01:08:11.104936 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:08:11.104942 | orchestrator | 2025-09-04 01:08:11.104947 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-09-04 01:08:11.104952 | orchestrator | Thursday 04 September 2025 01:07:41 +0000 (0:00:02.821) 0:00:45.817 **** 2025-09-04 01:08:11.104958 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:08:11.104963 | orchestrator | 2025-09-04 01:08:11.104968 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-09-04 01:08:11.104974 | orchestrator | Thursday 04 September 2025 01:07:44 +0000 (0:00:02.454) 0:00:48.272 **** 2025-09-04 01:08:11.104979 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:08:11.104984 | orchestrator | 2025-09-04 01:08:11.104989 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-04 01:08:11.104995 | orchestrator | Thursday 04 September 2025 01:07:57 +0000 (0:00:13.518) 0:01:01.790 **** 2025-09-04 01:08:11.105000 | orchestrator | 2025-09-04 01:08:11.105005 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-04 01:08:11.105011 | orchestrator | Thursday 04 September 2025 01:07:57 +0000 (0:00:00.063) 0:01:01.853 **** 2025-09-04 01:08:11.105016 | orchestrator | 2025-09-04 01:08:11.105024 | orchestrator | TASK 
[placement : Flush handlers] ********************************************** 2025-09-04 01:08:11.105030 | orchestrator | Thursday 04 September 2025 01:07:57 +0000 (0:00:00.065) 0:01:01.919 **** 2025-09-04 01:08:11.105035 | orchestrator | 2025-09-04 01:08:11.105040 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-09-04 01:08:11.105046 | orchestrator | Thursday 04 September 2025 01:07:58 +0000 (0:00:00.069) 0:01:01.988 **** 2025-09-04 01:08:11.105051 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:08:11.105056 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:08:11.105062 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:08:11.105067 | orchestrator | 2025-09-04 01:08:11.105073 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 01:08:11.105079 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-04 01:08:11.105086 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-04 01:08:11.105092 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-04 01:08:11.105104 | orchestrator | 2025-09-04 01:08:11.105109 | orchestrator | 2025-09-04 01:08:11.105115 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 01:08:11.105120 | orchestrator | Thursday 04 September 2025 01:08:09 +0000 (0:00:11.238) 0:01:13.227 **** 2025-09-04 01:08:11.105125 | orchestrator | =============================================================================== 2025-09-04 01:08:11.105131 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.52s 2025-09-04 01:08:11.105136 | orchestrator | placement : Restart placement-api container ---------------------------- 11.24s 2025-09-04 01:08:11.105141 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.07s 2025-09-04 01:08:11.105147 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.32s 2025-09-04 01:08:11.105152 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.99s 2025-09-04 01:08:11.105157 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.98s 2025-09-04 01:08:11.105162 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.78s 2025-09-04 01:08:11.105168 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.46s 2025-09-04 01:08:11.105173 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.43s 2025-09-04 01:08:11.105178 | orchestrator | placement : Creating placement databases -------------------------------- 2.82s 2025-09-04 01:08:11.105184 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.45s 2025-09-04 01:08:11.105189 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.85s 2025-09-04 01:08:11.105194 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.69s 2025-09-04 01:08:11.105199 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.58s 2025-09-04 01:08:11.105205 | orchestrator | placement : Copying over config.json files for services 
----------------- 1.53s 2025-09-04 01:08:11.105210 | orchestrator | placement : Check placement containers ---------------------------------- 1.32s 2025-09-04 01:08:11.105216 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.84s 2025-09-04 01:08:11.105221 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.80s 2025-09-04 01:08:11.105226 | orchestrator | placement : Copying over existing policy file --------------------------- 0.69s 2025-09-04 01:08:11.105232 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.68s 2025-09-04 01:08:11.105237 | orchestrator | 2025-09-04 01:08:11 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:08:14.150287 | orchestrator | 2025-09-04 01:08:14 | INFO  | Task f492c67d-0836-4173-b7fd-7a6578686478 is in state STARTED 2025-09-04 01:08:14.151128 | orchestrator | 2025-09-04 01:08:14 | INFO  | Task ba22c6f3-3ffe-448e-bf1f-d2593a40ab9d is in state STARTED 2025-09-04 01:08:14.151990 | orchestrator | 2025-09-04 01:08:14 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED 2025-09-04 01:08:14.152866 | orchestrator | 2025-09-04 01:08:14 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:08:14.152903 | orchestrator | 2025-09-04 01:08:14 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:08:17.185140 | orchestrator | 2025-09-04 01:08:17 | INFO  | Task f492c67d-0836-4173-b7fd-7a6578686478 is in state STARTED 2025-09-04 01:08:17.185411 | orchestrator | 2025-09-04 01:08:17 | INFO  | Task ba22c6f3-3ffe-448e-bf1f-d2593a40ab9d is in state SUCCESS 2025-09-04 01:08:17.186169 | orchestrator | 2025-09-04 01:08:17 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED 2025-09-04 01:08:17.187434 | orchestrator | 2025-09-04 01:08:17 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:08:17.187547 | orchestrator | 2025-09-04 01:08:17 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:08:20.223118 | orchestrator | 2025-09-04 01:08:20 | INFO  | Task f492c67d-0836-4173-b7fd-7a6578686478 is in state STARTED 2025-09-04 01:08:20.225895 | orchestrator | 2025-09-04 01:08:20 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED 2025-09-04 01:08:20.227907 | orchestrator | 2025-09-04 01:08:20 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED 2025-09-04 01:08:20.229905 | orchestrator | 2025-09-04 01:08:20 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:08:20.230127 | orchestrator | 2025-09-04 01:08:20 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:08:23.271707 | orchestrator | 2025-09-04 01:08:23 | INFO  | Task f492c67d-0836-4173-b7fd-7a6578686478 is in state STARTED 2025-09-04 01:08:23.274339 | orchestrator | 2025-09-04 01:08:23 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED 2025-09-04 01:08:23.276296 | orchestrator | 2025-09-04 01:08:23 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED 2025-09-04 01:08:23.277988 | orchestrator | 2025-09-04 01:08:23 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:08:23.278145 | orchestrator | 2025-09-04 01:08:23 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:08:26.324112 | orchestrator | 2025-09-04 01:08:26 | INFO  | Task f492c67d-0836-4173-b7fd-7a6578686478 is in state STARTED 2025-09-04 01:08:26.328494 | 
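Each play in this log closes with a PLAY RECAP block like the placement one above, listing ok/changed/unreachable/failed counters per host. When post-processing a console log such as this one, a few lines of Python suffice to pull those counters out and flag failed or unreachable hosts; the regex below is written against the exact recap format shown here.

```python
import re

# Matches host rows of the PLAY RECAP blocks above, e.g.
#   testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counters>(?:\w+=\d+\s*)+)$")

def parse_recap_line(line):
    """Return (host, {counter: value}) or None if the line is not a recap row."""
    m = RECAP_RE.match(line.strip())
    if not m:
        return None
    counters = {k: int(v) for k, v in (pair.split("=") for pair in m["counters"].split())}
    return m["host"], counters

host, counters = parse_recap_line(
    "testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0"
)
assert host == "testbed-node-0"
assert counters["failed"] == 0 and counters["unreachable"] == 0
```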
orchestrator | 2025-09-04 01:08:26 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED 2025-09-04 01:08:26.331164 | orchestrator | 2025-09-04 01:08:26 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED 2025-09-04 01:08:26.333378 | orchestrator | 2025-09-04 01:08:26 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:08:26.333405 | orchestrator | 2025-09-04 01:08:26 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:08:29.376276 | orchestrator | 2025-09-04 01:08:29 | INFO  | Task f492c67d-0836-4173-b7fd-7a6578686478 is in state STARTED 2025-09-04 01:08:29.378893 | orchestrator | 2025-09-04 01:08:29 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED 2025-09-04 01:08:29.381170 | orchestrator | 2025-09-04 01:08:29 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED 2025-09-04 01:08:29.385366 | orchestrator | 2025-09-04 01:08:29 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:08:29.385429 | orchestrator | 2025-09-04 01:08:29 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:08:32.426483 | orchestrator | 2025-09-04 01:08:32 | INFO  | Task f492c67d-0836-4173-b7fd-7a6578686478 is in state STARTED 2025-09-04 01:08:32.428629 | orchestrator | 2025-09-04 01:08:32 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED 2025-09-04 01:08:32.430409 | orchestrator | 2025-09-04 01:08:32 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED 2025-09-04 01:08:32.433490 | orchestrator | 2025-09-04 01:08:32 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:08:32.433910 | orchestrator | 2025-09-04 01:08:32 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:08:35.478676 | orchestrator | 2025-09-04 01:08:35 | INFO  | Task f492c67d-0836-4173-b7fd-7a6578686478 is in state STARTED 2025-09-04 01:08:35.479966 | orchestrator | 2025-09-04 01:08:35 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED 2025-09-04 01:08:35.482312 | orchestrator | 2025-09-04 01:08:35 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED 2025-09-04 01:08:35.483916 | orchestrator | 2025-09-04 01:08:35 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:08:35.483944 | orchestrator | 2025-09-04 01:08:35 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:08:38.530333 | orchestrator | 2025-09-04 01:08:38 | INFO  | Task f492c67d-0836-4173-b7fd-7a6578686478 is in state STARTED 2025-09-04 01:08:38.532112 | orchestrator | 2025-09-04 01:08:38 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED 2025-09-04 01:08:38.534122 | orchestrator | 2025-09-04 01:08:38 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED 2025-09-04 01:08:38.537259 | orchestrator | 2025-09-04 01:08:38 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:08:38.537612 | orchestrator | 2025-09-04 01:08:38 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:08:41.585266 | orchestrator | 2025-09-04 01:08:41 | INFO  | Task f492c67d-0836-4173-b7fd-7a6578686478 is in state STARTED 2025-09-04 01:08:41.585481 | orchestrator | 2025-09-04 01:08:41 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED 2025-09-04 01:08:41.586589 | orchestrator | 2025-09-04 01:08:41 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED 2025-09-04 01:08:41.587521 | 
orchestrator | 2025-09-04 01:08:41 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:08:41.587544 | orchestrator | 2025-09-04 01:08:41 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:08:44.621898 | orchestrator | 2025-09-04 01:08:44 | INFO  | Task f492c67d-0836-4173-b7fd-7a6578686478 is in state STARTED 2025-09-04 01:08:44.622009 | orchestrator | 2025-09-04 01:08:44 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED 2025-09-04 01:08:44.622082 | orchestrator | 2025-09-04 01:08:44 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED 2025-09-04 01:08:44.622095 | orchestrator | 2025-09-04 01:08:44 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:08:44.622106 | orchestrator | 2025-09-04 01:08:44 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:08:47.659210 | orchestrator | 2025-09-04 01:08:47 | INFO  | Task f492c67d-0836-4173-b7fd-7a6578686478 is in state STARTED 2025-09-04 01:08:47.660924 | orchestrator | 2025-09-04 01:08:47 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED 2025-09-04 01:08:47.663062 | orchestrator | 2025-09-04 01:08:47 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED 2025-09-04 01:08:47.665946 | orchestrator | 2025-09-04 01:08:47 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:08:47.666300 | orchestrator | 2025-09-04 01:08:47 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:08:50.716532 | orchestrator | 2025-09-04 01:08:50 | INFO  | Task f492c67d-0836-4173-b7fd-7a6578686478 is in state STARTED 2025-09-04 01:08:50.717724 | orchestrator | 2025-09-04 01:08:50 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED 2025-09-04 01:08:50.719283 | orchestrator | 2025-09-04 01:08:50 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED 2025-09-04 01:08:50.722000 | orchestrator | 2025-09-04 01:08:50 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:08:50.722084 | orchestrator | 2025-09-04 01:08:50 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:08:53.771894 | orchestrator | 2025-09-04 01:08:53 | INFO  | Task f492c67d-0836-4173-b7fd-7a6578686478 is in state STARTED 2025-09-04 01:08:53.773547 | orchestrator | 2025-09-04 01:08:53 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED 2025-09-04 01:08:53.776681 | orchestrator | 2025-09-04 01:08:53 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED 2025-09-04 01:08:53.778130 | orchestrator | 2025-09-04 01:08:53 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:08:53.778481 | orchestrator | 2025-09-04 01:08:53 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:08:56.823259 | orchestrator | 2025-09-04 01:08:56 | INFO  | Task f492c67d-0836-4173-b7fd-7a6578686478 is in state STARTED 2025-09-04 01:08:56.824891 | orchestrator | 2025-09-04 01:08:56 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED 2025-09-04 01:08:56.828525 | orchestrator | 2025-09-04 01:08:56 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED 2025-09-04 01:08:56.830700 | orchestrator | 2025-09-04 01:08:56 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:08:56.831390 | orchestrator | 2025-09-04 01:08:56 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:08:59.875093 | orchestrator | 2025-09-04 
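The long runs of "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" lines are the deploy job polling the OSISM task queue until each task reports SUCCESS, after which the task drops out of the list. The sketch below only mirrors that observable loop; how the osism CLI actually looks up task state is not visible in this log, so fetch_state() is a hypothetical stand-in.

```python
# Schematic version of the wait loop that produces the "Task <uuid> is in state ..."
# lines above. fetch_state(task_id) is a hypothetical callable returning a state
# string such as STARTED, SUCCESS or FAILURE.
import time

def wait_for_tasks(task_ids, fetch_state, interval=1):
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):       # sorted() copies, so discard() is safe
            state = fetch_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)
```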
01:08:59 | INFO  | Task f492c67d-0836-4173-b7fd-7a6578686478 is in state STARTED 2025-09-04 01:08:59.876453 | orchestrator | 2025-09-04 01:08:59 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED 2025-09-04 01:08:59.878128 | orchestrator | 2025-09-04 01:08:59 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED 2025-09-04 01:08:59.879810 | orchestrator | 2025-09-04 01:08:59 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:08:59.879834 | orchestrator | 2025-09-04 01:08:59 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:09:02.928743 | orchestrator | 2025-09-04 01:09:02 | INFO  | Task f492c67d-0836-4173-b7fd-7a6578686478 is in state SUCCESS 2025-09-04 01:09:02.931789 | orchestrator | 2025-09-04 01:09:02.932121 | orchestrator | 2025-09-04 01:09:02.932141 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-04 01:09:02.932153 | orchestrator | 2025-09-04 01:09:02.932164 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-04 01:09:02.932176 | orchestrator | Thursday 04 September 2025 01:08:13 +0000 (0:00:00.193) 0:00:00.193 **** 2025-09-04 01:09:02.932187 | orchestrator | ok: [testbed-node-0] 2025-09-04 01:09:02.932200 | orchestrator | ok: [testbed-node-1] 2025-09-04 01:09:02.932211 | orchestrator | ok: [testbed-node-2] 2025-09-04 01:09:02.932222 | orchestrator | 2025-09-04 01:09:02.932233 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-04 01:09:02.932244 | orchestrator | Thursday 04 September 2025 01:08:14 +0000 (0:00:00.386) 0:00:00.579 **** 2025-09-04 01:09:02.932255 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-09-04 01:09:02.932267 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-09-04 01:09:02.932278 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-09-04 01:09:02.932289 | orchestrator | 2025-09-04 01:09:02.932299 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-09-04 01:09:02.932310 | orchestrator | 2025-09-04 01:09:02.932321 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-09-04 01:09:02.932332 | orchestrator | Thursday 04 September 2025 01:08:15 +0000 (0:00:00.788) 0:00:01.367 **** 2025-09-04 01:09:02.932343 | orchestrator | ok: [testbed-node-1] 2025-09-04 01:09:02.932354 | orchestrator | ok: [testbed-node-0] 2025-09-04 01:09:02.932365 | orchestrator | ok: [testbed-node-2] 2025-09-04 01:09:02.932376 | orchestrator | 2025-09-04 01:09:02.932386 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 01:09:02.932398 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 01:09:02.932436 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 01:09:02.932448 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-04 01:09:02.932458 | orchestrator | 2025-09-04 01:09:02.932469 | orchestrator | 2025-09-04 01:09:02.932481 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 01:09:02.932491 | orchestrator | Thursday 04 September 2025 01:08:15 +0000 (0:00:00.826) 0:00:02.193 **** 2025-09-04 
01:09:02.932502 | orchestrator | =============================================================================== 2025-09-04 01:09:02.932513 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.83s 2025-09-04 01:09:02.932524 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.79s 2025-09-04 01:09:02.932535 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.39s 2025-09-04 01:09:02.932545 | orchestrator | 2025-09-04 01:09:02.932556 | orchestrator | 2025-09-04 01:09:02.932567 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-04 01:09:02.932578 | orchestrator | 2025-09-04 01:09:02.932589 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-04 01:09:02.932600 | orchestrator | Thursday 04 September 2025 01:07:10 +0000 (0:00:00.457) 0:00:00.457 **** 2025-09-04 01:09:02.932610 | orchestrator | ok: [testbed-node-0] 2025-09-04 01:09:02.932621 | orchestrator | ok: [testbed-node-1] 2025-09-04 01:09:02.932632 | orchestrator | ok: [testbed-node-2] 2025-09-04 01:09:02.932642 | orchestrator | 2025-09-04 01:09:02.932653 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-04 01:09:02.932664 | orchestrator | Thursday 04 September 2025 01:07:10 +0000 (0:00:00.438) 0:00:00.896 **** 2025-09-04 01:09:02.932675 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-09-04 01:09:02.932686 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-09-04 01:09:02.932697 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-09-04 01:09:02.932708 | orchestrator | 2025-09-04 01:09:02.932718 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-09-04 01:09:02.932732 | orchestrator | 2025-09-04 01:09:02.932746 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-04 01:09:02.932780 | orchestrator | Thursday 04 September 2025 01:07:11 +0000 (0:00:00.463) 0:00:01.359 **** 2025-09-04 01:09:02.932793 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 01:09:02.932807 | orchestrator | 2025-09-04 01:09:02.932820 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-09-04 01:09:02.932846 | orchestrator | Thursday 04 September 2025 01:07:11 +0000 (0:00:00.568) 0:00:01.928 **** 2025-09-04 01:09:02.932861 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-09-04 01:09:02.932874 | orchestrator | 2025-09-04 01:09:02.932887 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-09-04 01:09:02.932900 | orchestrator | Thursday 04 September 2025 01:07:14 +0000 (0:00:03.387) 0:00:05.315 **** 2025-09-04 01:09:02.932912 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-09-04 01:09:02.932925 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-09-04 01:09:02.932938 | orchestrator | 2025-09-04 01:09:02.932950 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-09-04 01:09:02.932962 | orchestrator | Thursday 04 September 2025 01:07:21 
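The "Waiting for Nova public port to be UP" task above is essentially a TCP reachability check against the Nova API on each controller (Ansible wait_for-style behaviour). A rough Python equivalent is sketched below; the host name is taken from this testbed's public endpoint, and 8774 is only the usual nova-api port, since the log does not show the exact port checked.

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 300.0, interval: float = 5.0) -> bool:
    """Return True once a TCP connection to host:port succeeds, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False

if __name__ == "__main__":
    # Example values for this deployment; 8774 is an assumption, not taken from the log.
    print(wait_for_port("api.testbed.osism.xyz", 8774))
```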
+0000 (0:00:06.802) 0:00:12.117 **** 2025-09-04 01:09:02.932975 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-04 01:09:02.932988 | orchestrator | 2025-09-04 01:09:02.933000 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-09-04 01:09:02.933021 | orchestrator | Thursday 04 September 2025 01:07:25 +0000 (0:00:03.748) 0:00:15.866 **** 2025-09-04 01:09:02.933046 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-04 01:09:02.933060 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-09-04 01:09:02.933074 | orchestrator | 2025-09-04 01:09:02.933085 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-09-04 01:09:02.933095 | orchestrator | Thursday 04 September 2025 01:07:29 +0000 (0:00:04.344) 0:00:20.210 **** 2025-09-04 01:09:02.933106 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-04 01:09:02.933117 | orchestrator | 2025-09-04 01:09:02.933127 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-09-04 01:09:02.933139 | orchestrator | Thursday 04 September 2025 01:07:33 +0000 (0:00:03.359) 0:00:23.570 **** 2025-09-04 01:09:02.933149 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-09-04 01:09:02.933160 | orchestrator | 2025-09-04 01:09:02.933171 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-09-04 01:09:02.933182 | orchestrator | Thursday 04 September 2025 01:07:37 +0000 (0:00:04.444) 0:00:28.014 **** 2025-09-04 01:09:02.933192 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:09:02.933203 | orchestrator | 2025-09-04 01:09:02.933214 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-09-04 01:09:02.933225 | orchestrator | Thursday 04 September 2025 01:07:41 +0000 (0:00:03.578) 0:00:31.593 **** 2025-09-04 01:09:02.933235 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:09:02.933246 | orchestrator | 2025-09-04 01:09:02.933256 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-09-04 01:09:02.933267 | orchestrator | Thursday 04 September 2025 01:07:45 +0000 (0:00:04.069) 0:00:35.663 **** 2025-09-04 01:09:02.933355 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:09:02.933367 | orchestrator | 2025-09-04 01:09:02.933378 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-09-04 01:09:02.933389 | orchestrator | Thursday 04 September 2025 01:07:49 +0000 (0:00:03.871) 0:00:39.535 **** 2025-09-04 01:09:02.933405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-04 01:09:02.933421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-04 01:09:02.933438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-04 01:09:02.933469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:02.933483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:02.933495 | orchestrator | changed: 
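The service-ks-register tasks earlier in this play registered Magnum in Keystone: a "magnum" service of type "container-infra", internal and public endpoints on port 9511, the "magnum" user in the "service" project, and an "admin" role grant (plus the separate trustee domain and user). kolla-ansible drives this through its own role and modules; the openstacksdk sketch below only approximates the same sequence of API calls and uses placeholder credentials.

```python
import openstack

# Assumes a clouds.yaml entry with admin credentials for this cloud.
conn = openstack.connect(cloud="testbed")

# Service and endpoints, mirroring the values shown in the log.
service = conn.identity.create_service(name="magnum", type="container-infra")
for interface, url in [
    ("internal", "https://api-int.testbed.osism.xyz:9511/v1"),
    ("public", "https://api.testbed.osism.xyz:9511/v1"),
]:
    conn.identity.create_endpoint(service_id=service.id, interface=interface, url=url)

# Service user and role grant; the real password comes from kolla's generated secrets.
project = conn.identity.find_project("service")
user = conn.identity.create_user(name="magnum", password="CHANGE_ME",
                                 default_project_id=project.id)
role = conn.identity.find_role("admin")
conn.identity.assign_project_role_to_user(project, user, role)
```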
[testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:02.933506 | orchestrator | 2025-09-04 01:09:02.933518 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-09-04 01:09:02.933529 | orchestrator | Thursday 04 September 2025 01:07:50 +0000 (0:00:01.466) 0:00:41.001 **** 2025-09-04 01:09:02.933540 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:02.933551 | orchestrator | 2025-09-04 01:09:02.933562 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-09-04 01:09:02.933572 | orchestrator | Thursday 04 September 2025 01:07:50 +0000 (0:00:00.143) 0:00:41.144 **** 2025-09-04 01:09:02.933583 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:02.933594 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:02.933605 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:02.933616 | orchestrator | 2025-09-04 01:09:02.933627 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-09-04 01:09:02.933644 | orchestrator | Thursday 04 September 2025 01:07:51 +0000 (0:00:00.474) 0:00:41.618 **** 2025-09-04 01:09:02.933655 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-04 01:09:02.933665 | orchestrator | 2025-09-04 01:09:02.933676 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-09-04 01:09:02.933687 | orchestrator | Thursday 04 September 2025 01:07:52 +0000 (0:00:00.860) 0:00:42.479 **** 2025-09-04 01:09:02.933703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-04 01:09:02.933723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-04 01:09:02.933736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-04 01:09:02.933747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:02.933781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:02.933804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:02.933816 | orchestrator | 2025-09-04 01:09:02.933827 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-09-04 01:09:02.933838 | orchestrator | Thursday 04 September 2025 01:07:54 +0000 (0:00:02.482) 0:00:44.961 **** 2025-09-04 01:09:02.933850 | orchestrator | ok: [testbed-node-0] 2025-09-04 01:09:02.933861 | orchestrator | ok: [testbed-node-1] 2025-09-04 01:09:02.933871 | orchestrator | ok: [testbed-node-2] 2025-09-04 01:09:02.933882 | orchestrator | 2025-09-04 01:09:02.933893 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-04 01:09:02.933910 | orchestrator | Thursday 04 September 2025 01:07:54 +0000 (0:00:00.306) 0:00:45.267 **** 2025-09-04 01:09:02.933922 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 01:09:02.933933 | orchestrator | 2025-09-04 01:09:02.933944 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-09-04 01:09:02.933955 | orchestrator | Thursday 04 September 2025 01:07:55 +0000 (0:00:00.728) 0:00:45.996 **** 2025-09-04 01:09:02.933967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-04 01:09:02.933979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-04 01:09:02.933999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 
'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-04 01:09:02.934065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:02.934092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:02.934106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:02.934119 | orchestrator | 2025-09-04 01:09:02.934132 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-09-04 01:09:02.934144 | orchestrator | Thursday 04 September 2025 01:07:58 +0000 (0:00:02.371) 0:00:48.367 **** 2025-09-04 01:09:02.934158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-04 01:09:02.934180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-04 01:09:02.934194 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:02.934213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-04 01:09:02.934235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-04 01:09:02.934249 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:02.934262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-04 01:09:02.934282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-04 01:09:02.934296 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:02.934308 | orchestrator | 2025-09-04 01:09:02.934321 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-09-04 01:09:02.934335 | orchestrator | Thursday 04 September 2025 01:07:58 +0000 (0:00:00.751) 0:00:49.119 **** 2025-09-04 01:09:02.934356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-04 01:09:02.934368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-04 01:09:02.934380 | orchestrator | 
skipping: [testbed-node-0] 2025-09-04 01:09:02.934398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-04 01:09:02.934411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-04 01:09:02.934428 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:02.934440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-04 01:09:02.934456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-04 01:09:02.934468 | orchestrator | 
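Each magnum-api item in these tasks carries a healthcheck of the form `healthcheck_curl http://<node-ip>:9511`, i.e. the container counts as healthy while its API socket answers HTTP requests. `healthcheck_curl` is a helper shipped inside the kolla images; the Python below is only an illustration of the same idea, not that helper's exact semantics.

```python
import sys
import urllib.request
import urllib.error

def check_http(url: str, timeout: float = 30.0) -> int:
    """Return 0 (healthy) if the URL answers at all, 1 otherwise."""
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return 0
    except urllib.error.HTTPError:
        # For this sketch, any HTTP response counts as "service is up";
        # the real helper may be stricter about status codes.
        return 0
    except (urllib.error.URLError, OSError):
        return 1

if __name__ == "__main__":
    # Example: the magnum-api bind address of testbed-node-0 in this log.
    sys.exit(check_http("http://192.168.16.10:9511"))
```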
skipping: [testbed-node-2] 2025-09-04 01:09:02.934479 | orchestrator | 2025-09-04 01:09:02.934491 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-09-04 01:09:02.934501 | orchestrator | Thursday 04 September 2025 01:08:00 +0000 (0:00:01.372) 0:00:50.491 **** 2025-09-04 01:09:02.934519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-04 01:09:02.934531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-04 01:09:02.934549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-04 01:09:02.934560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:02.934576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:02.934595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:02.934607 | orchestrator | 2025-09-04 01:09:02.934618 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-09-04 01:09:02.934629 | orchestrator | Thursday 04 September 2025 01:08:02 +0000 (0:00:02.510) 0:00:53.001 **** 2025-09-04 01:09:02.934640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-04 01:09:02.934658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-04 01:09:02.934669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-04 01:09:02.934685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:02.934704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:02.934716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:02.934734 | orchestrator | 2025-09-04 01:09:02.934745 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-09-04 01:09:02.934774 | orchestrator | Thursday 04 September 2025 01:08:08 +0000 (0:00:05.474) 0:00:58.476 **** 2025-09-04 01:09:02.934785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-04 01:09:02.934797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-04 01:09:02.934808 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:02.934973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-04 01:09:02.935003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-04 01:09:02.935023 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:02.935035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-04 01:09:02.935047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-04 01:09:02.935059 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:02.935070 | orchestrator | 2025-09-04 01:09:02.935082 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-09-04 01:09:02.935093 | orchestrator | Thursday 04 September 2025 01:08:08 +0000 (0:00:00.631) 0:00:59.108 **** 2025-09-04 01:09:02.935117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-04 01:09:02.935137 | orchestrator | 
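The recurring item dictionaries in these tasks (container_name, image, volumes, environment, healthcheck, haproxy) are kolla-ansible's per-service container definitions; the "Check magnum containers" task and the restart handlers further down feed them into kolla-ansible's own container module. Purely as an illustration of how such a definition maps onto a container runtime, here is a rough docker SDK sketch, not the module the role actually uses.

```python
import docker

# One service definition, trimmed from the magnum_api item shown in the log.
service = {
    "container_name": "magnum_api",
    "image": "registry.osism.tech/kolla/magnum-api:2024.2",
    "environment": {"DUMMY_ENVIRONMENT": "kolla_useless_env"},
    "volumes": [
        "/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "kolla_logs:/var/log/kolla/",
    ],
    "healthcheck": {
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9511"],
        "interval": 30, "timeout": 30, "retries": 3, "start_period": 5,
    },
}

client = docker.from_env()
client.containers.run(
    service["image"],
    name=service["container_name"],
    detach=True,
    restart_policy={"Name": "unless-stopped"},
    environment=service["environment"],
    volumes=service["volumes"],
    healthcheck={
        "test": service["healthcheck"]["test"],
        # The docker API expects durations in nanoseconds.
        "interval": service["healthcheck"]["interval"] * 10**9,
        "timeout": service["healthcheck"]["timeout"] * 10**9,
        "retries": service["healthcheck"]["retries"],
        "start_period": service["healthcheck"]["start_period"] * 10**9,
    },
)
```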
changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-04 01:09:02.935155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-04 01:09:02.935168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:02.935180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:02.935196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:02.935208 | orchestrator | 2025-09-04 01:09:02.935219 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-04 01:09:02.935231 | orchestrator | Thursday 04 September 2025 01:08:11 +0000 (0:00:02.656) 0:01:01.765 **** 2025-09-04 01:09:02.935242 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:02.935253 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:02.935265 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:02.935276 | orchestrator | 2025-09-04 01:09:02.935287 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-09-04 01:09:02.935299 | orchestrator | Thursday 04 September 2025 01:08:11 +0000 (0:00:00.284) 0:01:02.049 **** 2025-09-04 01:09:02.935309 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:09:02.935320 | orchestrator | 2025-09-04 01:09:02.935337 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-09-04 01:09:02.935348 | orchestrator | Thursday 04 September 2025 01:08:14 +0000 (0:00:02.356) 0:01:04.406 **** 2025-09-04 01:09:02.935358 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:09:02.935369 | orchestrator | 2025-09-04 01:09:02.935379 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-09-04 01:09:02.935390 | orchestrator | Thursday 04 September 2025 01:08:16 +0000 (0:00:02.417) 0:01:06.823 **** 2025-09-04 01:09:02.935408 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:09:02.935420 | orchestrator | 2025-09-04 01:09:02.935430 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-04 01:09:02.935441 | orchestrator | Thursday 04 September 2025 01:08:36 +0000 (0:00:19.728) 0:01:26.552 **** 2025-09-04 01:09:02.935452 | orchestrator | 2025-09-04 01:09:02.935462 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-04 01:09:02.935473 | orchestrator | Thursday 04 September 2025 01:08:36 +0000 (0:00:00.065) 0:01:26.617 **** 2025-09-04 01:09:02.935484 | orchestrator | 2025-09-04 01:09:02.935494 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-04 01:09:02.935505 | orchestrator | Thursday 04 September 2025 01:08:36 +0000 (0:00:00.068) 0:01:26.686 **** 2025-09-04 01:09:02.935516 | orchestrator | 2025-09-04 01:09:02.935526 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-09-04 01:09:02.935537 | orchestrator | Thursday 04 September 2025 01:08:36 +0000 (0:00:00.075) 0:01:26.761 **** 2025-09-04 01:09:02.935547 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:09:02.935558 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:09:02.935569 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:09:02.935580 | orchestrator | 2025-09-04 
01:09:02.935590 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2025-09-04 01:09:02.935601 | orchestrator | Thursday 04 September 2025 01:08:51 +0000 (0:00:15.451) 0:01:42.213 ****
2025-09-04 01:09:02.935616 | orchestrator | changed: [testbed-node-0]
2025-09-04 01:09:02.935629 | orchestrator | changed: [testbed-node-1]
2025-09-04 01:09:02.935642 | orchestrator | changed: [testbed-node-2]
2025-09-04 01:09:02.935655 | orchestrator |
2025-09-04 01:09:02.935669 | orchestrator | PLAY RECAP *********************************************************************
2025-09-04 01:09:02.935683 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-04 01:09:02.935696 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-04 01:09:02.935710 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-04 01:09:02.935722 | orchestrator |
2025-09-04 01:09:02.935735 | orchestrator |
2025-09-04 01:09:02.935747 | orchestrator | TASKS RECAP ********************************************************************
2025-09-04 01:09:02.935791 | orchestrator | Thursday 04 September 2025 01:09:02 +0000 (0:00:10.389) 0:01:52.602 ****
2025-09-04 01:09:02.935803 | orchestrator | ===============================================================================
2025-09-04 01:09:02.935816 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 19.73s
2025-09-04 01:09:02.935829 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 15.45s
2025-09-04 01:09:02.935842 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 10.39s
2025-09-04 01:09:02.935854 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.80s
2025-09-04 01:09:02.935867 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.48s
2025-09-04 01:09:02.935878 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.44s
2025-09-04 01:09:02.935891 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.34s
2025-09-04 01:09:02.935909 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.07s
2025-09-04 01:09:02.935922 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.87s
2025-09-04 01:09:02.935935 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.75s
2025-09-04 01:09:02.935948 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.58s
2025-09-04 01:09:02.935962 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.39s
2025-09-04 01:09:02.935973 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.36s
2025-09-04 01:09:02.935984 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.66s
2025-09-04 01:09:02.935995 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.51s
2025-09-04 01:09:02.936014 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.48s
2025-09-04 01:09:02.936025 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.42s
2025-09-04 01:09:02.936036 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.37s
2025-09-04 01:09:02.936047 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.36s
2025-09-04 01:09:02.936058 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.47s
2025-09-04 01:09:02.936068 | orchestrator | 2025-09-04 01:09:02 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED
2025-09-04 01:09:02.936079 | orchestrator | 2025-09-04 01:09:02 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED
2025-09-04 01:09:02.936090 | orchestrator | 2025-09-04 01:09:02 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED
2025-09-04 01:09:02.936101 | orchestrator | 2025-09-04 01:09:02 | INFO  | Wait 1 second(s) until the next check
2025-09-04 01:09:05.967921 | orchestrator | 2025-09-04 01:09:05 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED
2025-09-04 01:09:05.969001 | orchestrator | 2025-09-04 01:09:05 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED
2025-09-04 01:09:05.969849 | orchestrator | 2025-09-04 01:09:05 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED
2025-09-04 01:09:05.969876 | orchestrator | 2025-09-04 01:09:05 | INFO  | Wait 1 second(s) until the next check
2025-09-04 01:09:09.019257 | orchestrator | 2025-09-04 01:09:09 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED
2025-09-04 01:09:09.020544 | orchestrator | 2025-09-04 01:09:09 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED
2025-09-04 01:09:09.021617 | orchestrator | 2025-09-04 01:09:09 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED
2025-09-04 01:09:09.021830 | orchestrator | 2025-09-04 01:09:09 | INFO  | Wait 1 second(s) until the next check
2025-09-04 01:09:12.046449 | orchestrator | 2025-09-04 01:09:12 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED
2025-09-04 01:09:12.046547 | orchestrator | 2025-09-04 01:09:12 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED
2025-09-04 01:09:12.046562 | orchestrator | 2025-09-04 01:09:12 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED
2025-09-04 01:09:12.046574 | orchestrator | 2025-09-04 01:09:12 | INFO  | Wait 1 second(s) until the next check
2025-09-04 01:09:15.071957 | orchestrator | 2025-09-04 01:09:15 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED
2025-09-04 01:09:15.074696 | orchestrator | 2025-09-04 01:09:15 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED
2025-09-04 01:09:15.076191 | orchestrator | 2025-09-04 01:09:15 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED
2025-09-04 01:09:15.076245 | orchestrator | 2025-09-04 01:09:15 | INFO  | Wait 1 second(s) until the next check
2025-09-04 01:09:18.113021 | orchestrator | 2025-09-04 01:09:18 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED
2025-09-04 01:09:18.113475 | orchestrator | 2025-09-04 01:09:18 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED
2025-09-04 01:09:18.114460 | orchestrator | 2025-09-04 01:09:18 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED
2025-09-04 01:09:18.114486 | orchestrator | 2025-09-04 01:09:18 | INFO  | Wait 1 second(s) until the next check
2025-09-04 01:09:21.161583 | orchestrator |
2025-09-04 01:09:21 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED
2025-09-04 01:09:21.163450 | orchestrator | 2025-09-04 01:09:21 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED
2025-09-04 01:09:21.165336 | orchestrator | 2025-09-04 01:09:21 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED
2025-09-04 01:09:21.165394 | orchestrator | 2025-09-04 01:09:21 | INFO  | Wait 1 second(s) until the next check
2025-09-04 01:09:24.207371 | orchestrator | 2025-09-04 01:09:24 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED
2025-09-04 01:09:24.209417 | orchestrator | 2025-09-04 01:09:24 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED
2025-09-04 01:09:24.211402 | orchestrator | 2025-09-04 01:09:24 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED
2025-09-04 01:09:24.211429 | orchestrator | 2025-09-04 01:09:24 | INFO  | Wait 1 second(s) until the next check
2025-09-04 01:09:27.258536 | orchestrator | 2025-09-04 01:09:27 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED
2025-09-04 01:09:27.261156 | orchestrator | 2025-09-04 01:09:27 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED
2025-09-04 01:09:27.264800 | orchestrator | 2025-09-04 01:09:27 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED
2025-09-04 01:09:27.264828 | orchestrator | 2025-09-04 01:09:27 | INFO  | Wait 1 second(s) until the next check
2025-09-04 01:09:30.314927 | orchestrator | 2025-09-04 01:09:30 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED
2025-09-04 01:09:30.316598 | orchestrator | 2025-09-04 01:09:30 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED
2025-09-04 01:09:30.319460 | orchestrator | 2025-09-04 01:09:30 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED
2025-09-04 01:09:30.320167 | orchestrator | 2025-09-04 01:09:30 | INFO  | Wait 1 second(s) until the next check
2025-09-04 01:09:33.360095 | orchestrator | 2025-09-04 01:09:33 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED
2025-09-04 01:09:33.362131 | orchestrator | 2025-09-04 01:09:33 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state STARTED
2025-09-04 01:09:33.364892 | orchestrator | 2025-09-04 01:09:33 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED
2025-09-04 01:09:33.365529 | orchestrator | 2025-09-04 01:09:33 | INFO  | Wait 1 second(s) until the next check
2025-09-04 01:09:36.405958 | orchestrator | 2025-09-04 01:09:36 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED
2025-09-04 01:09:36.410277 | orchestrator | 2025-09-04 01:09:36 | INFO  | Task b0c402be-87bb-4fff-a097-e869a0b59ec5 is in state SUCCESS
2025-09-04 01:09:36.411862 | orchestrator |
2025-09-04 01:09:36.411955 | orchestrator |
2025-09-04 01:09:36.411969 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-04 01:09:36.412008 | orchestrator |
2025-09-04 01:09:36.412019 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-04 01:09:36.412029 | orchestrator | Thursday 04 September 2025 01:07:17 +0000 (0:00:00.266) 0:00:00.266 ****
2025-09-04 01:09:36.412038 | orchestrator | ok: [testbed-node-0]
2025-09-04 01:09:36.412049 | orchestrator | ok: [testbed-node-1]
2025-09-04 01:09:36.412059 | orchestrator | ok: [testbed-node-2]
2025-09-04 01:09:36.412069 |
orchestrator | 2025-09-04 01:09:36.412079 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-04 01:09:36.412089 | orchestrator | Thursday 04 September 2025 01:07:17 +0000 (0:00:00.287) 0:00:00.553 **** 2025-09-04 01:09:36.412099 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-09-04 01:09:36.412109 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-09-04 01:09:36.412118 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-09-04 01:09:36.412128 | orchestrator | 2025-09-04 01:09:36.412137 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-09-04 01:09:36.412147 | orchestrator | 2025-09-04 01:09:36.412156 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-09-04 01:09:36.412728 | orchestrator | Thursday 04 September 2025 01:07:18 +0000 (0:00:00.401) 0:00:00.955 **** 2025-09-04 01:09:36.412771 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 01:09:36.412783 | orchestrator | 2025-09-04 01:09:36.412793 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-09-04 01:09:36.412803 | orchestrator | Thursday 04 September 2025 01:07:18 +0000 (0:00:00.514) 0:00:01.469 **** 2025-09-04 01:09:36.412816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-04 01:09:36.412830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-04 01:09:36.412853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-04 
01:09:36.412864 | orchestrator | 2025-09-04 01:09:36.412874 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-09-04 01:09:36.412883 | orchestrator | Thursday 04 September 2025 01:07:19 +0000 (0:00:00.757) 0:00:02.227 **** 2025-09-04 01:09:36.412893 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-09-04 01:09:36.412914 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-09-04 01:09:36.412924 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-04 01:09:36.412933 | orchestrator | 2025-09-04 01:09:36.412943 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-09-04 01:09:36.413017 | orchestrator | Thursday 04 September 2025 01:07:20 +0000 (0:00:00.868) 0:00:03.095 **** 2025-09-04 01:09:36.413032 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 01:09:36.413042 | orchestrator | 2025-09-04 01:09:36.413052 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-09-04 01:09:36.413061 | orchestrator | Thursday 04 September 2025 01:07:20 +0000 (0:00:00.699) 0:00:03.795 **** 2025-09-04 01:09:36.413107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-04 01:09:36.413120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-04 01:09:36.413130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-04 01:09:36.413140 | orchestrator | 2025-09-04 01:09:36.413150 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal 
TLS certificate] *** 2025-09-04 01:09:36.413160 | orchestrator | Thursday 04 September 2025 01:07:22 +0000 (0:00:01.476) 0:00:05.272 **** 2025-09-04 01:09:36.413175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-04 01:09:36.413186 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:36.413196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-04 01:09:36.413213 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:36.413249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-04 01:09:36.413261 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:36.413271 | orchestrator | 2025-09-04 01:09:36.413280 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-09-04 01:09:36.413290 | orchestrator | Thursday 04 September 2025 01:07:22 +0000 (0:00:00.374) 0:00:05.647 **** 2025-09-04 01:09:36.413300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-04 01:09:36.413310 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:36.413319 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-04 01:09:36.413329 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:36.413339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-04 01:09:36.413349 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:36.413359 | orchestrator | 2025-09-04 01:09:36.413368 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-09-04 01:09:36.413384 | orchestrator | Thursday 04 September 2025 01:07:23 +0000 (0:00:01.004) 0:00:06.651 **** 2025-09-04 01:09:36.413399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-04 01:09:36.413409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-04 01:09:36.413445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-04 01:09:36.413457 | orchestrator | 2025-09-04 01:09:36.413466 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-09-04 01:09:36.413476 | orchestrator | Thursday 04 September 2025 01:07:25 +0000 (0:00:01.395) 0:00:08.046 **** 2025-09-04 01:09:36.413485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-04 01:09:36.413496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-04 01:09:36.413510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-04 01:09:36.413526 | orchestrator | 2025-09-04 01:09:36.413536 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-09-04 01:09:36.413545 | orchestrator | Thursday 04 September 2025 01:07:26 +0000 (0:00:01.464) 0:00:09.510 **** 2025-09-04 01:09:36.413555 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:36.413565 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:36.413574 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:36.413584 | orchestrator | 2025-09-04 01:09:36.413593 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-09-04 01:09:36.413603 | orchestrator | Thursday 04 September 2025 01:07:27 +0000 (0:00:00.461) 0:00:09.972 **** 
2025-09-04 01:09:36.413612 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-04 01:09:36.413622 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-04 01:09:36.413631 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-04 01:09:36.413640 | orchestrator | 2025-09-04 01:09:36.413650 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-09-04 01:09:36.413659 | orchestrator | Thursday 04 September 2025 01:07:28 +0000 (0:00:01.201) 0:00:11.173 **** 2025-09-04 01:09:36.413669 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-04 01:09:36.413679 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-04 01:09:36.413688 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-04 01:09:36.413698 | orchestrator | 2025-09-04 01:09:36.413707 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-09-04 01:09:36.413717 | orchestrator | Thursday 04 September 2025 01:07:29 +0000 (0:00:01.342) 0:00:12.515 **** 2025-09-04 01:09:36.413771 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-04 01:09:36.413783 | orchestrator | 2025-09-04 01:09:36.413793 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-09-04 01:09:36.413803 | orchestrator | Thursday 04 September 2025 01:07:30 +0000 (0:00:01.335) 0:00:13.851 **** 2025-09-04 01:09:36.413812 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-09-04 01:09:36.413822 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-09-04 01:09:36.413831 | orchestrator | ok: [testbed-node-0] 2025-09-04 01:09:36.413841 | orchestrator | ok: [testbed-node-2] 2025-09-04 01:09:36.413850 | orchestrator | ok: [testbed-node-1] 2025-09-04 01:09:36.413860 | orchestrator | 2025-09-04 01:09:36.413870 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-09-04 01:09:36.413880 | orchestrator | Thursday 04 September 2025 01:07:31 +0000 (0:00:00.861) 0:00:14.712 **** 2025-09-04 01:09:36.413889 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:36.413899 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:36.413909 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:36.413918 | orchestrator | 2025-09-04 01:09:36.413928 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-09-04 01:09:36.413937 | orchestrator | Thursday 04 September 2025 01:07:32 +0000 (0:00:00.698) 0:00:15.411 **** 2025-09-04 01:09:36.413948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1093885, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5702717, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.413966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1093885, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5702717, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.413981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1093885, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5702717, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.413992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1093976, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5850472, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1093976, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5850472, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1093976, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5850472, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 
01:09:36.414095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1093896, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5732718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1093896, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5732718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1093896, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5732718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1093978, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5862718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1093978, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5862718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1093978, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5862718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1093925, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5787828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1093925, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5787828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1093925, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5787828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1093955, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.583693, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 39556, 'inode': 1093955, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.583693, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1093955, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.583693, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1093882, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5682716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1093882, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5682716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1093882, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5682716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1093893, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5712717, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1093893, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5712717, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1093893, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5712717, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1093900, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5741775, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1093900, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5741775, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1093900, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5741775, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414436 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1093935, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5799804, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1093935, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5799804, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1093935, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5799804, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1093971, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5842717, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1093971, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5842717, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1093971, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5842717, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1093894, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5722716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1093894, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5722716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1093894, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5722716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1093949, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5821075, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1093949, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5821075, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1093949, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5821075, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1093929, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5792718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1093929, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5792718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1093929, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5792718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1093915, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5779185, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2025-09-04 01:09:36.414684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1093915, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5779185, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1093915, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5779185, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1093911, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5768356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1093911, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5768356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1093911, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5768356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': 
{'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1093940, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5812716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1093940, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5812716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1093940, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5812716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1093902, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.575787, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1093902, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.575787, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1093902, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.575787, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1093967, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5842717, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1093967, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5842717, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1093967, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5842717, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1094080, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6270137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1094080, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 
1756944989.6270137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1094080, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6270137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1094019, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5978956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1094019, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5978956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1094019, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5978956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1094000, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5912716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.414998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1094000, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5912716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1094000, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5912716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1094043, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.600707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1094043, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.600707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1094043, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.600707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415063 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1093982, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5872717, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1093982, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5872717, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1093982, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5872717, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1094065, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.617272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1094065, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.617272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415128 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1094065, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.617272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1094046, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6112719, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1094046, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6112719, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1094046, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6112719, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1094069, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.617272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415182 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1094069, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.617272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1094069, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.617272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1094077, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.623272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1094077, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.623272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1094077, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.623272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1094063, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6152718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1094063, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6152718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1094063, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6152718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1094038, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5996132, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1094038, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5996132, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1094038, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5996132, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1094016, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5942717, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1094016, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5942717, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1094030, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5992718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1094016, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5942717, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1094030, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 
'mtime': 1756944141.0, 'ctime': 1756944989.5992718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1094003, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5932224, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1094030, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5992718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1094003, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5932224, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1094041, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6002717, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1094003, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5932224, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1094041, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6002717, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1094072, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6222718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1094041, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6002717, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1094072, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6222718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1094071, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6192718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1094072, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6222718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1094071, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6192718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1093984, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5886257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1094071, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6192718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1093984, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5886257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1093989, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5896018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1093984, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5886257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1093989, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5896018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1093989, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.5896018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1094058, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.613272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415639 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1094058, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.613272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1094058, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.613272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1094070, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6182718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1094070, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6182718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1094070, 'dev': 92, 'nlink': 1, 'atime': 1756944141.0, 'mtime': 1756944141.0, 'ctime': 1756944989.6182718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-04 01:09:36.415698 | orchestrator | 2025-09-04 01:09:36.415708 | orchestrator | TASK [grafana : Check 
grafana containers] ************************************** 2025-09-04 01:09:36.415717 | orchestrator | Thursday 04 September 2025 01:08:12 +0000 (0:00:39.894) 0:00:55.305 **** 2025-09-04 01:09:36.415727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-04 01:09:36.415766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-04 01:09:36.415777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-04 01:09:36.415787 | orchestrator | 2025-09-04 01:09:36.415797 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-09-04 01:09:36.415807 | orchestrator | Thursday 04 September 2025 01:08:13 +0000 (0:00:01.042) 0:00:56.348 **** 2025-09-04 01:09:36.415816 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:09:36.415826 | orchestrator | 2025-09-04 01:09:36.415836 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-09-04 01:09:36.415845 | orchestrator | Thursday 04 September 2025 01:08:15 +0000 (0:00:02.230) 0:00:58.578 **** 2025-09-04 01:09:36.415854 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:09:36.415864 | orchestrator | 2025-09-04 01:09:36.415873 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-04 01:09:36.415883 | orchestrator | Thursday 04 September 2025 01:08:17 +0000 (0:00:02.201) 0:01:00.780 **** 2025-09-04 01:09:36.415892 | orchestrator | 2025-09-04 01:09:36.415901 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-04 01:09:36.415911 | orchestrator | Thursday 04 September 2025 01:08:18 +0000 (0:00:00.134) 
0:01:00.915 ****
2025-09-04 01:09:36.415920 | orchestrator |
2025-09-04 01:09:36.415935 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-09-04 01:09:36.415945 | orchestrator | Thursday 04 September 2025 01:08:18 +0000 (0:00:00.145) 0:01:01.060 ****
2025-09-04 01:09:36.415954 | orchestrator |
2025-09-04 01:09:36.415963 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2025-09-04 01:09:36.415972 | orchestrator | Thursday 04 September 2025 01:08:18 +0000 (0:00:00.673) 0:01:01.734 ****
2025-09-04 01:09:36.415982 | orchestrator | skipping: [testbed-node-1]
2025-09-04 01:09:36.415991 | orchestrator | skipping: [testbed-node-2]
2025-09-04 01:09:36.416001 | orchestrator | changed: [testbed-node-0]
2025-09-04 01:09:36.416010 | orchestrator |
2025-09-04 01:09:36.416020 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2025-09-04 01:09:36.416029 | orchestrator | Thursday 04 September 2025 01:08:20 +0000 (0:00:02.057) 0:01:03.791 ****
2025-09-04 01:09:36.416039 | orchestrator | skipping: [testbed-node-2]
2025-09-04 01:09:36.416048 | orchestrator | skipping: [testbed-node-1]
2025-09-04 01:09:36.416057 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2025-09-04 01:09:36.416073 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2025-09-04 01:09:36.416083 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2025-09-04 01:09:36.416092 | orchestrator | ok: [testbed-node-0]
2025-09-04 01:09:36.416102 | orchestrator |
2025-09-04 01:09:36.416111 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2025-09-04 01:09:36.416121 | orchestrator | Thursday 04 September 2025 01:08:59 +0000 (0:00:38.609) 0:01:42.401 ****
2025-09-04 01:09:36.416130 | orchestrator | skipping: [testbed-node-0]
2025-09-04 01:09:36.416139 | orchestrator | changed: [testbed-node-1]
2025-09-04 01:09:36.416149 | orchestrator | changed: [testbed-node-2]
2025-09-04 01:09:36.416158 | orchestrator |
2025-09-04 01:09:36.416168 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2025-09-04 01:09:36.416177 | orchestrator | Thursday 04 September 2025 01:09:29 +0000 (0:00:29.896) 0:02:12.297 ****
2025-09-04 01:09:36.416187 | orchestrator | ok: [testbed-node-0]
2025-09-04 01:09:36.416196 | orchestrator |
2025-09-04 01:09:36.416205 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2025-09-04 01:09:36.416215 | orchestrator | Thursday 04 September 2025 01:09:31 +0000 (0:00:02.272) 0:02:14.570 ****
2025-09-04 01:09:36.416224 | orchestrator | skipping: [testbed-node-0]
2025-09-04 01:09:36.416233 | orchestrator | skipping: [testbed-node-1]
2025-09-04 01:09:36.416243 | orchestrator | skipping: [testbed-node-2]
2025-09-04 01:09:36.416252 | orchestrator |
2025-09-04 01:09:36.416261 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-09-04 01:09:36.416271 | orchestrator | Thursday 04 September 2025 01:09:32 +0000 (0:00:00.498) 0:02:15.069 ****
2025-09-04 01:09:36.416280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2025-09-04 01:09:36.416292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-09-04 01:09:36.416302 | orchestrator |
2025-09-04 01:09:36.416316 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-09-04 01:09:36.416326 | orchestrator | Thursday 04 September 2025 01:09:34 +0000 (0:00:02.366) 0:02:17.435 ****
2025-09-04 01:09:36.416335 | orchestrator | skipping: [testbed-node-0]
2025-09-04 01:09:36.416345 | orchestrator |
2025-09-04 01:09:36.416354 | orchestrator | PLAY RECAP *********************************************************************
2025-09-04 01:09:36.416364 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-04 01:09:36.416374 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-04 01:09:36.416384 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-04 01:09:36.416393 | orchestrator |
2025-09-04 01:09:36.416402 | orchestrator |
2025-09-04 01:09:36.416412 | orchestrator | TASKS RECAP ********************************************************************
2025-09-04 01:09:36.416421 | orchestrator | Thursday 04 September 2025 01:09:34 +0000 (0:00:00.275) 0:02:17.711 ****
2025-09-04 01:09:36.416431 | orchestrator | ===============================================================================
2025-09-04 01:09:36.416440 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 39.89s
2025-09-04 01:09:36.416456 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.61s
2025-09-04 01:09:36.416465 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 29.90s
2025-09-04 01:09:36.416474 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.37s
2025-09-04 01:09:36.416484 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.27s
2025-09-04 01:09:36.416493 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.23s
2025-09-04 01:09:36.416508 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.20s
2025-09-04 01:09:36.416518 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.06s
2025-09-04 01:09:36.416527 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.48s
2025-09-04 01:09:36.416536 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.46s
2025-09-04 01:09:36.416546 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.40s
2025-09-04 01:09:36.416555 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.34s
2025-09-04 01:09:36.416564 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 1.34s
2025-09-04 01:09:36.416574 |
orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.20s 2025-09-04 01:09:36.416583 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.04s 2025-09-04 01:09:36.416592 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 1.00s 2025-09-04 01:09:36.416602 | orchestrator | grafana : Flush handlers ------------------------------------------------ 0.95s 2025-09-04 01:09:36.416611 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.87s 2025-09-04 01:09:36.416620 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.86s 2025-09-04 01:09:36.416630 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.76s 2025-09-04 01:09:36.416639 | orchestrator | 2025-09-04 01:09:36 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:09:36.416649 | orchestrator | 2025-09-04 01:09:36 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:09:39.460525 | orchestrator | 2025-09-04 01:09:39 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED 2025-09-04 01:09:39.463102 | orchestrator | 2025-09-04 01:09:39 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:09:39.463148 | orchestrator | 2025-09-04 01:09:39 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:09:42.509639 | orchestrator | 2025-09-04 01:09:42 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED 2025-09-04 01:09:42.512446 | orchestrator | 2025-09-04 01:09:42 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:09:42.512483 | orchestrator | 2025-09-04 01:09:42 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:09:45.564090 | orchestrator | 2025-09-04 01:09:45 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED 2025-09-04 01:09:45.567196 | orchestrator | 2025-09-04 01:09:45 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state STARTED 2025-09-04 01:09:45.567238 | orchestrator | 2025-09-04 01:09:45 | INFO  | Wait 1 second(s) until the next check 2025-09-04 01:09:48.625352 | orchestrator | 2025-09-04 01:09:48 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED 2025-09-04 01:09:48.629997 | orchestrator | 2025-09-04 01:09:48 | INFO  | Task 9c4298f1-5690-458f-8646-4696770688c4 is in state SUCCESS 2025-09-04 01:09:48.632981 | orchestrator | 2025-09-04 01:09:48.633016 | orchestrator | 2025-09-04 01:09:48.633029 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-04 01:09:48.633069 | orchestrator | 2025-09-04 01:09:48.633096 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-09-04 01:09:48.633108 | orchestrator | Thursday 04 September 2025 01:00:39 +0000 (0:00:00.278) 0:00:00.278 **** 2025-09-04 01:09:48.633119 | orchestrator | changed: [testbed-manager] 2025-09-04 01:09:48.633131 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:09:48.633142 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:09:48.633153 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:09:48.633163 | orchestrator | changed: [testbed-node-3] 2025-09-04 01:09:48.633215 | orchestrator | changed: [testbed-node-4] 2025-09-04 01:09:48.633226 | orchestrator | changed: [testbed-node-5] 2025-09-04 01:09:48.633237 | orchestrator | 
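The "Group hosts based on ..." tasks in this play (OpenStack release, Kolla action, enabled services) build dynamic inventory groups at runtime, which is why every host reports "changed" and why a group name such as enable_nova_True appears as a loop item in the next task. A minimal Ansible sketch of that group_by pattern follows; the play name, flag list, and key format are illustrative assumptions, not the literal kolla-ansible tasks from this run:

- name: Group hosts based on enabled services (illustrative sketch)
  hosts: all
  gather_facts: false
  tasks:
    - name: Add each host to a dynamic group named after a boolean flag
      # Produces groups such as "enable_nova_True" when enable_nova is True on the host.
      ansible.builtin.group_by:
        key: "{{ item }}_{{ hostvars[inventory_hostname][item] | default(false) }}"
      loop:
        - enable_nova   # hypothetical flag list; the real play iterates kolla's enable_* switches

Later plays can then simply target hosts: enable_nova_True instead of re-evaluating the flag on every task, which is the pattern the bootstrap plays below rely on.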
2025-09-04 01:09:48.633247 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-04 01:09:48.633258 | orchestrator | Thursday 04 September 2025 01:00:39 +0000 (0:00:00.659) 0:00:00.938 **** 2025-09-04 01:09:48.633448 | orchestrator | changed: [testbed-manager] 2025-09-04 01:09:48.633497 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:09:48.634054 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:09:48.634077 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:09:48.634088 | orchestrator | changed: [testbed-node-3] 2025-09-04 01:09:48.635555 | orchestrator | changed: [testbed-node-4] 2025-09-04 01:09:48.635583 | orchestrator | changed: [testbed-node-5] 2025-09-04 01:09:48.635594 | orchestrator | 2025-09-04 01:09:48.635606 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-04 01:09:48.635617 | orchestrator | Thursday 04 September 2025 01:00:40 +0000 (0:00:00.647) 0:00:01.586 **** 2025-09-04 01:09:48.635628 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-09-04 01:09:48.635640 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-09-04 01:09:48.635652 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-09-04 01:09:48.635662 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-09-04 01:09:48.635673 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-09-04 01:09:48.635684 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-09-04 01:09:48.635695 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-09-04 01:09:48.635706 | orchestrator | 2025-09-04 01:09:48.635717 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-09-04 01:09:48.635728 | orchestrator | 2025-09-04 01:09:48.635763 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-09-04 01:09:48.635774 | orchestrator | Thursday 04 September 2025 01:00:41 +0000 (0:00:00.888) 0:00:02.474 **** 2025-09-04 01:09:48.635785 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 01:09:48.635796 | orchestrator | 2025-09-04 01:09:48.635806 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-09-04 01:09:48.635817 | orchestrator | Thursday 04 September 2025 01:00:42 +0000 (0:00:01.644) 0:00:04.119 **** 2025-09-04 01:09:48.635829 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-09-04 01:09:48.635840 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-09-04 01:09:48.635851 | orchestrator | 2025-09-04 01:09:48.635861 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-09-04 01:09:48.635872 | orchestrator | Thursday 04 September 2025 01:00:46 +0000 (0:00:03.783) 0:00:07.902 **** 2025-09-04 01:09:48.635883 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-04 01:09:48.635893 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-04 01:09:48.635904 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:09:48.635915 | orchestrator | 2025-09-04 01:09:48.635925 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-09-04 01:09:48.635936 | orchestrator | Thursday 04 September 2025 01:00:50 +0000 (0:00:03.573) 0:00:11.476 **** 2025-09-04 
01:09:48.635947 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:09:48.635958 | orchestrator | 2025-09-04 01:09:48.635983 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-09-04 01:09:48.635995 | orchestrator | Thursday 04 September 2025 01:00:51 +0000 (0:00:00.824) 0:00:12.300 **** 2025-09-04 01:09:48.636005 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:09:48.636016 | orchestrator | 2025-09-04 01:09:48.636027 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-09-04 01:09:48.636038 | orchestrator | Thursday 04 September 2025 01:00:52 +0000 (0:00:01.707) 0:00:14.008 **** 2025-09-04 01:09:48.636049 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:09:48.636059 | orchestrator | 2025-09-04 01:09:48.636070 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-04 01:09:48.636081 | orchestrator | Thursday 04 September 2025 01:00:56 +0000 (0:00:03.195) 0:00:17.204 **** 2025-09-04 01:09:48.636091 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.636102 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.636113 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.636124 | orchestrator | 2025-09-04 01:09:48.636134 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-09-04 01:09:48.636145 | orchestrator | Thursday 04 September 2025 01:00:56 +0000 (0:00:00.511) 0:00:17.715 **** 2025-09-04 01:09:48.636156 | orchestrator | ok: [testbed-node-0] 2025-09-04 01:09:48.636167 | orchestrator | 2025-09-04 01:09:48.636177 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-09-04 01:09:48.636188 | orchestrator | Thursday 04 September 2025 01:01:25 +0000 (0:00:29.264) 0:00:46.980 **** 2025-09-04 01:09:48.636199 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:09:48.636210 | orchestrator | 2025-09-04 01:09:48.636220 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-04 01:09:48.636231 | orchestrator | Thursday 04 September 2025 01:01:40 +0000 (0:00:14.524) 0:01:01.505 **** 2025-09-04 01:09:48.636242 | orchestrator | ok: [testbed-node-0] 2025-09-04 01:09:48.636252 | orchestrator | 2025-09-04 01:09:48.636263 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-04 01:09:48.636274 | orchestrator | Thursday 04 September 2025 01:01:52 +0000 (0:00:12.593) 0:01:14.098 **** 2025-09-04 01:09:48.636376 | orchestrator | ok: [testbed-node-0] 2025-09-04 01:09:48.636392 | orchestrator | 2025-09-04 01:09:48.636411 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-09-04 01:09:48.636422 | orchestrator | Thursday 04 September 2025 01:01:54 +0000 (0:00:01.093) 0:01:15.191 **** 2025-09-04 01:09:48.636433 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.636443 | orchestrator | 2025-09-04 01:09:48.636454 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-04 01:09:48.636465 | orchestrator | Thursday 04 September 2025 01:01:54 +0000 (0:00:00.437) 0:01:15.629 **** 2025-09-04 01:09:48.636476 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 01:09:48.636487 | orchestrator | 2025-09-04 01:09:48.636498 | 
orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-09-04 01:09:48.636509 | orchestrator | Thursday 04 September 2025 01:01:54 +0000 (0:00:00.519) 0:01:16.149 **** 2025-09-04 01:09:48.636519 | orchestrator | ok: [testbed-node-0] 2025-09-04 01:09:48.636530 | orchestrator | 2025-09-04 01:09:48.636541 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-09-04 01:09:48.636551 | orchestrator | Thursday 04 September 2025 01:02:13 +0000 (0:00:18.347) 0:01:34.497 **** 2025-09-04 01:09:48.636562 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.636573 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.636583 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.636594 | orchestrator | 2025-09-04 01:09:48.636605 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-09-04 01:09:48.636615 | orchestrator | 2025-09-04 01:09:48.636626 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-09-04 01:09:48.636637 | orchestrator | Thursday 04 September 2025 01:02:13 +0000 (0:00:00.314) 0:01:34.811 **** 2025-09-04 01:09:48.636656 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 01:09:48.636666 | orchestrator | 2025-09-04 01:09:48.636677 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-09-04 01:09:48.636688 | orchestrator | Thursday 04 September 2025 01:02:14 +0000 (0:00:00.567) 0:01:35.378 **** 2025-09-04 01:09:48.636698 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.636709 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.636720 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:09:48.636730 | orchestrator | 2025-09-04 01:09:48.636770 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-09-04 01:09:48.636782 | orchestrator | Thursday 04 September 2025 01:02:16 +0000 (0:00:02.112) 0:01:37.491 **** 2025-09-04 01:09:48.636792 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.636803 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.636814 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:09:48.636825 | orchestrator | 2025-09-04 01:09:48.636835 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-09-04 01:09:48.636846 | orchestrator | Thursday 04 September 2025 01:02:18 +0000 (0:00:02.404) 0:01:39.895 **** 2025-09-04 01:09:48.636857 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.636867 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.636878 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.636889 | orchestrator | 2025-09-04 01:09:48.636899 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-09-04 01:09:48.636910 | orchestrator | Thursday 04 September 2025 01:02:19 +0000 (0:00:01.133) 0:01:41.028 **** 2025-09-04 01:09:48.636921 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-04 01:09:48.636951 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.636963 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-04 01:09:48.636974 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.636984 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-09-04 01:09:48.636995 | 
orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-09-04 01:09:48.637006 | orchestrator | 2025-09-04 01:09:48.637016 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-09-04 01:09:48.637027 | orchestrator | Thursday 04 September 2025 01:02:28 +0000 (0:00:09.067) 0:01:50.096 **** 2025-09-04 01:09:48.637038 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.637048 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.637059 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.637069 | orchestrator | 2025-09-04 01:09:48.637080 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-09-04 01:09:48.637091 | orchestrator | Thursday 04 September 2025 01:02:29 +0000 (0:00:00.626) 0:01:50.723 **** 2025-09-04 01:09:48.637101 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-04 01:09:48.637112 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-04 01:09:48.637122 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.637133 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.637143 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-04 01:09:48.637154 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.637164 | orchestrator | 2025-09-04 01:09:48.637175 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-09-04 01:09:48.637186 | orchestrator | Thursday 04 September 2025 01:02:30 +0000 (0:00:01.080) 0:01:51.803 **** 2025-09-04 01:09:48.637196 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.637207 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.637218 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:09:48.637228 | orchestrator | 2025-09-04 01:09:48.637239 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-09-04 01:09:48.637249 | orchestrator | Thursday 04 September 2025 01:02:31 +0000 (0:00:00.674) 0:01:52.478 **** 2025-09-04 01:09:48.637267 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.637278 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:09:48.637288 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.637299 | orchestrator | 2025-09-04 01:09:48.637310 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-09-04 01:09:48.637320 | orchestrator | Thursday 04 September 2025 01:02:32 +0000 (0:00:01.310) 0:01:53.788 **** 2025-09-04 01:09:48.637331 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.637342 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.637435 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:09:48.637452 | orchestrator | 2025-09-04 01:09:48.637469 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-09-04 01:09:48.637480 | orchestrator | Thursday 04 September 2025 01:02:35 +0000 (0:00:03.242) 0:01:57.030 **** 2025-09-04 01:09:48.637491 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.637501 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.637512 | orchestrator | ok: [testbed-node-0] 2025-09-04 01:09:48.637523 | orchestrator | 2025-09-04 01:09:48.637534 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-04 01:09:48.637544 | orchestrator | Thursday 04 
September 2025 01:02:57 +0000 (0:00:21.416) 0:02:18.446 **** 2025-09-04 01:09:48.637555 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.637566 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.637577 | orchestrator | ok: [testbed-node-0] 2025-09-04 01:09:48.637588 | orchestrator | 2025-09-04 01:09:48.637598 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-04 01:09:48.637609 | orchestrator | Thursday 04 September 2025 01:03:09 +0000 (0:00:11.725) 0:02:30.172 **** 2025-09-04 01:09:48.637620 | orchestrator | ok: [testbed-node-0] 2025-09-04 01:09:48.637630 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.637641 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.637651 | orchestrator | 2025-09-04 01:09:48.637662 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-09-04 01:09:48.637673 | orchestrator | Thursday 04 September 2025 01:03:10 +0000 (0:00:01.100) 0:02:31.273 **** 2025-09-04 01:09:48.637684 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.637694 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.637705 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:09:48.637715 | orchestrator | 2025-09-04 01:09:48.637726 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-09-04 01:09:48.637754 | orchestrator | Thursday 04 September 2025 01:03:21 +0000 (0:00:11.177) 0:02:42.450 **** 2025-09-04 01:09:48.637766 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.637776 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.637787 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.637798 | orchestrator | 2025-09-04 01:09:48.637808 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-09-04 01:09:48.637819 | orchestrator | Thursday 04 September 2025 01:03:22 +0000 (0:00:01.064) 0:02:43.515 **** 2025-09-04 01:09:48.637830 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.637840 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.637851 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.637862 | orchestrator | 2025-09-04 01:09:48.637873 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-09-04 01:09:48.637883 | orchestrator | 2025-09-04 01:09:48.637894 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-04 01:09:48.637905 | orchestrator | Thursday 04 September 2025 01:03:22 +0000 (0:00:00.494) 0:02:44.010 **** 2025-09-04 01:09:48.637916 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 01:09:48.637928 | orchestrator | 2025-09-04 01:09:48.637938 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-09-04 01:09:48.637949 | orchestrator | Thursday 04 September 2025 01:03:23 +0000 (0:00:00.536) 0:02:44.546 **** 2025-09-04 01:09:48.637967 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-09-04 01:09:48.637978 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-09-04 01:09:48.637989 | orchestrator | 2025-09-04 01:09:48.638000 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-09-04 01:09:48.638011 | 
orchestrator | Thursday 04 September 2025 01:03:26 +0000 (0:00:03.287) 0:02:47.833 **** 2025-09-04 01:09:48.638057 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-09-04 01:09:48.638072 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-09-04 01:09:48.638085 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-09-04 01:09:48.638099 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-09-04 01:09:48.638112 | orchestrator | 2025-09-04 01:09:48.638124 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-09-04 01:09:48.638136 | orchestrator | Thursday 04 September 2025 01:03:33 +0000 (0:00:06.887) 0:02:54.721 **** 2025-09-04 01:09:48.638149 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-04 01:09:48.638161 | orchestrator | 2025-09-04 01:09:48.638175 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-09-04 01:09:48.638187 | orchestrator | Thursday 04 September 2025 01:03:36 +0000 (0:00:03.164) 0:02:57.886 **** 2025-09-04 01:09:48.638200 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-04 01:09:48.638213 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-09-04 01:09:48.638225 | orchestrator | 2025-09-04 01:09:48.638239 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-09-04 01:09:48.638252 | orchestrator | Thursday 04 September 2025 01:03:40 +0000 (0:00:04.189) 0:03:02.076 **** 2025-09-04 01:09:48.638265 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-04 01:09:48.638277 | orchestrator | 2025-09-04 01:09:48.638290 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-09-04 01:09:48.638302 | orchestrator | Thursday 04 September 2025 01:03:44 +0000 (0:00:03.609) 0:03:05.685 **** 2025-09-04 01:09:48.638316 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-09-04 01:09:48.638329 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-09-04 01:09:48.638341 | orchestrator | 2025-09-04 01:09:48.638354 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-09-04 01:09:48.638444 | orchestrator | Thursday 04 September 2025 01:03:52 +0000 (0:00:07.988) 0:03:13.674 **** 2025-09-04 01:09:48.638471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-04 01:09:48.638500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-04 01:09:48.638514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-04 01:09:48.638570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': 
'30'}}}) 2025-09-04 01:09:48.638586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.638597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.638615 | orchestrator | 2025-09-04 01:09:48.638626 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-09-04 01:09:48.638637 | orchestrator | Thursday 04 September 2025 01:03:53 +0000 (0:00:01.436) 0:03:15.110 **** 2025-09-04 01:09:48.638648 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.638659 | orchestrator | 2025-09-04 01:09:48.638669 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-09-04 01:09:48.638680 | orchestrator | Thursday 04 September 2025 01:03:54 +0000 (0:00:00.175) 0:03:15.285 **** 2025-09-04 01:09:48.638691 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.638701 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.638712 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.638723 | orchestrator | 2025-09-04 01:09:48.638754 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-09-04 01:09:48.638766 | orchestrator | Thursday 04 September 2025 01:03:54 +0000 (0:00:00.508) 0:03:15.794 **** 2025-09-04 01:09:48.638776 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-04 01:09:48.638787 | orchestrator | 2025-09-04 01:09:48.638798 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-09-04 01:09:48.638808 | orchestrator | Thursday 04 September 2025 01:03:55 +0000 (0:00:01.137) 0:03:16.931 **** 2025-09-04 01:09:48.638819 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.638829 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.638840 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.638850 | orchestrator | 2025-09-04 01:09:48.638861 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-04 01:09:48.638871 | orchestrator | Thursday 04 September 2025 01:03:56 +0000 (0:00:00.559) 0:03:17.491 **** 2025-09-04 01:09:48.638882 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 01:09:48.638893 | orchestrator | 2025-09-04 01:09:48.638904 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 
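This service-cert-copy step distributes operator-provided CA certificates into each Nova service's /etc/kolla/<service>/ config directory on the target nodes, so the containers pick them up through their config_files bind mount. A minimal sketch of that kind of copy task follows; the source glob and destination path are illustrative assumptions, not the role's actual implementation:

- name: nova | Copying over extra CA certificates (illustrative sketch)
  ansible.builtin.copy:
    src: "{{ item }}"
    dest: "/etc/kolla/nova-api/ca-certificates/"   # hypothetical destination inside the service config dir
    mode: "0644"
  become: true
  with_fileglob:
    - "/etc/kolla/config/ca-certificates/*"        # assumed operator-supplied CA bundle location on the deploy host

The "changed" results that follow show this copy landing on testbed-node-0/1/2 for both the nova-api and nova-scheduler entries of the service dictionary.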
2025-09-04 01:09:48.638914 | orchestrator | Thursday 04 September 2025 01:03:56 +0000 (0:00:00.560) 0:03:18.051 **** 2025-09-04 01:09:48.638926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-04 01:09:48.638979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-04 01:09:48.639001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-04 01:09:48.639014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.639026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.639068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.639084 | orchestrator | 2025-09-04 01:09:48.639102 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-04 01:09:48.639121 | orchestrator | Thursday 04 September 2025 01:03:59 +0000 (0:00:02.950) 0:03:21.001 **** 2025-09-04 01:09:48.639134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-04 01:09:48.639149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-04 01:09:48.639162 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.639177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-04 01:09:48.639191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-04 01:09:48.639204 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.639256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-04 01:09:48.639279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-04 01:09:48.639292 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.639325 | orchestrator | 2025-09-04 01:09:48.639338 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-04 01:09:48.639351 | orchestrator | Thursday 04 September 2025 01:04:01 +0000 (0:00:02.075) 0:03:23.077 **** 2025-09-04 01:09:48.639366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-04 01:09:48.639381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-04 01:09:48.639394 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.639447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-04 01:09:48.639470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-04 01:09:48.639481 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.639493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-04 01:09:48.639505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-04 01:09:48.639516 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.639527 | orchestrator | 2025-09-04 01:09:48.639538 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-09-04 01:09:48.639549 | orchestrator | Thursday 04 September 2025 01:04:03 +0000 (0:00:01.144) 0:03:24.221 **** 2025-09-04 01:09:48.639599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-04 01:09:48.639641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-04 01:09:48.639655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-04 01:09:48.639667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.639724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.639794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.639807 | orchestrator | 2025-09-04 01:09:48.639818 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-09-04 01:09:48.639829 | orchestrator | Thursday 04 September 2025 01:04:05 +0000 (0:00:02.745) 0:03:26.967 **** 2025-09-04 01:09:48.639841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 
'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-04 01:09:48.639853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-04 01:09:48.639916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-04 01:09:48.639931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.639943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.639955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.639966 | orchestrator | 2025-09-04 01:09:48.639977 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-09-04 01:09:48.639988 | orchestrator | Thursday 04 September 2025 01:04:15 +0000 (0:00:09.897) 0:03:36.864 **** 2025-09-04 01:09:48.639999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-04 01:09:48.640046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-04 01:09:48.640065 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.640077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-04 01:09:48.640089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-04 01:09:48.640100 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.640111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-04 01:09:48.640130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-04 01:09:48.640141 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.640152 | orchestrator | 2025-09-04 01:09:48.640162 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-09-04 01:09:48.640173 | orchestrator | Thursday 04 September 2025 01:04:16 +0000 (0:00:01.079) 0:03:37.943 **** 2025-09-04 01:09:48.640184 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:09:48.640195 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:09:48.640206 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:09:48.640216 | orchestrator | 2025-09-04 01:09:48.640256 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-09-04 01:09:48.640273 | orchestrator | Thursday 04 September 2025 01:04:18 +0000 (0:00:01.696) 0:03:39.640 **** 2025-09-04 01:09:48.640284 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.640295 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.640305 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.640314 | orchestrator | 2025-09-04 01:09:48.640324 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-09-04 01:09:48.640333 | orchestrator | Thursday 04 September 2025 01:04:19 +0000 (0:00:00.543) 0:03:40.184 **** 2025-09-04 01:09:48.640343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-04 01:09:48.640354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-04 01:09:48.640397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-04 01:09:48.640414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.640425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.640436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.640446 | orchestrator | 2025-09-04 01:09:48.640455 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-04 01:09:48.640465 | orchestrator | Thursday 04 September 2025 01:04:22 +0000 (0:00:03.177) 0:03:43.362 **** 2025-09-04 01:09:48.640475 | orchestrator | 2025-09-04 01:09:48.640484 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-04 01:09:48.640499 | orchestrator | Thursday 04 September 2025 01:04:22 +0000 (0:00:00.275) 0:03:43.637 **** 2025-09-04 01:09:48.640509 | orchestrator | 2025-09-04 01:09:48.640519 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-04 01:09:48.640528 | orchestrator | Thursday 04 September 2025 01:04:22 +0000 (0:00:00.303) 0:03:43.941 **** 2025-09-04 01:09:48.640538 | orchestrator | 2025-09-04 01:09:48.640547 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-09-04 01:09:48.640557 | orchestrator | Thursday 04 September 2025 01:04:23 +0000 (0:00:00.348) 0:03:44.290 **** 2025-09-04 01:09:48.640566 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:09:48.640575 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:09:48.640585 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:09:48.640594 | orchestrator | 2025-09-04 01:09:48.640604 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-09-04 01:09:48.640613 | orchestrator | Thursday 04 September 2025 01:04:44 +0000 (0:00:21.249) 0:04:05.539 **** 2025-09-04 01:09:48.640623 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:09:48.640632 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:09:48.640641 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:09:48.640651 | orchestrator | 2025-09-04 01:09:48.640660 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-09-04 01:09:48.640669 | orchestrator | 2025-09-04 01:09:48.640679 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-04 01:09:48.640688 | orchestrator | Thursday 04 September 2025 01:04:57 +0000 (0:00:12.660) 0:04:18.199 **** 2025-09-04 01:09:48.640698 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 01:09:48.640708 | orchestrator | 2025-09-04 01:09:48.640717 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-04 01:09:48.640727 | orchestrator | Thursday 04 September 2025 01:04:58 +0000 (0:00:01.410) 0:04:19.610 **** 2025-09-04 01:09:48.640777 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:09:48.640788 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:09:48.640798 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:09:48.640807 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.640817 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.640826 | orchestrator | skipping: 
[testbed-node-2] 2025-09-04 01:09:48.640835 | orchestrator | 2025-09-04 01:09:48.640845 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-09-04 01:09:48.640855 | orchestrator | Thursday 04 September 2025 01:04:59 +0000 (0:00:01.188) 0:04:20.799 **** 2025-09-04 01:09:48.640864 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.640874 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.640883 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.640893 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 01:09:48.640902 | orchestrator | 2025-09-04 01:09:48.640912 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-04 01:09:48.640951 | orchestrator | Thursday 04 September 2025 01:05:01 +0000 (0:00:01.867) 0:04:22.666 **** 2025-09-04 01:09:48.640962 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-09-04 01:09:48.640976 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-09-04 01:09:48.640986 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-09-04 01:09:48.640996 | orchestrator | 2025-09-04 01:09:48.641005 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-04 01:09:48.641014 | orchestrator | Thursday 04 September 2025 01:05:02 +0000 (0:00:00.897) 0:04:23.563 **** 2025-09-04 01:09:48.641024 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-09-04 01:09:48.641033 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-09-04 01:09:48.641041 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-09-04 01:09:48.641054 | orchestrator | 2025-09-04 01:09:48.641062 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-04 01:09:48.641070 | orchestrator | Thursday 04 September 2025 01:05:03 +0000 (0:00:01.583) 0:04:25.147 **** 2025-09-04 01:09:48.641078 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-09-04 01:09:48.641086 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:09:48.641093 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-09-04 01:09:48.641101 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:09:48.641108 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-09-04 01:09:48.641116 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:09:48.641124 | orchestrator | 2025-09-04 01:09:48.641131 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-09-04 01:09:48.641139 | orchestrator | Thursday 04 September 2025 01:05:04 +0000 (0:00:00.984) 0:04:26.132 **** 2025-09-04 01:09:48.641147 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-04 01:09:48.641155 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-04 01:09:48.641163 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.641171 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-04 01:09:48.641178 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-04 01:09:48.641186 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-04 01:09:48.641194 | orchestrator | skipping: [testbed-node-1] => 
(item=net.bridge.bridge-nf-call-iptables)  2025-09-04 01:09:48.641202 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-04 01:09:48.641209 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.641217 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-04 01:09:48.641225 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-04 01:09:48.641233 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.641240 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-04 01:09:48.641248 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-04 01:09:48.641256 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-04 01:09:48.641263 | orchestrator | 2025-09-04 01:09:48.641271 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-09-04 01:09:48.641279 | orchestrator | Thursday 04 September 2025 01:05:06 +0000 (0:00:01.521) 0:04:27.653 **** 2025-09-04 01:09:48.641287 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.641294 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.641302 | orchestrator | changed: [testbed-node-3] 2025-09-04 01:09:48.641310 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.641317 | orchestrator | changed: [testbed-node-4] 2025-09-04 01:09:48.641325 | orchestrator | changed: [testbed-node-5] 2025-09-04 01:09:48.641332 | orchestrator | 2025-09-04 01:09:48.641340 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-09-04 01:09:48.641348 | orchestrator | Thursday 04 September 2025 01:05:08 +0000 (0:00:01.644) 0:04:29.298 **** 2025-09-04 01:09:48.641356 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.641363 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.641371 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.641378 | orchestrator | changed: [testbed-node-3] 2025-09-04 01:09:48.641386 | orchestrator | changed: [testbed-node-5] 2025-09-04 01:09:48.641394 | orchestrator | changed: [testbed-node-4] 2025-09-04 01:09:48.641401 | orchestrator | 2025-09-04 01:09:48.641409 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-09-04 01:09:48.641417 | orchestrator | Thursday 04 September 2025 01:05:10 +0000 (0:00:02.271) 0:04:31.569 **** 2025-09-04 01:09:48.641430 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-04 01:09:48.641465 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-04 01:09:48.641476 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-04 01:09:48.641485 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-04 01:09:48.641493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-04 01:09:48.641502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-04 01:09:48.641515 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-04 01:09:48.641549 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.641559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.641567 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-04 01:09:48.641575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.641584 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.641618 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.641634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-04 01:09:48.641642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.641650 | orchestrator | 2025-09-04 01:09:48.641658 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-04 01:09:48.641666 | orchestrator | Thursday 04 September 2025 01:05:14 +0000 (0:00:04.165) 0:04:35.735 **** 2025-09-04 01:09:48.641675 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 01:09:48.641683 | orchestrator | 2025-09-04 01:09:48.641691 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-09-04 01:09:48.641699 | orchestrator | Thursday 04 September 2025 01:05:16 +0000 (0:00:01.998) 0:04:37.733 **** 2025-09-04 01:09:48.641707 | orchestrator | 
changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-04 01:09:48.641723 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-04 01:09:48.641732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-04 01:09:48.641781 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-04 01:09:48.641791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 
'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-04 01:09:48.641799 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-04 01:09:48.641808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-04 01:09:48.641821 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-04 01:09:48.641830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.641863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.641873 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 
'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-04 01:09:48.641881 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.641889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.641897 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.641910 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.641918 | orchestrator | 2025-09-04 01:09:48.641926 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-04 01:09:48.641934 | orchestrator | Thursday 04 September 2025 01:05:21 +0000 (0:00:05.227) 0:04:42.961 **** 2025-09-04 01:09:48.641968 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-04 01:09:48.641978 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-04 01:09:48.641986 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-04 01:09:48.641999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-04 01:09:48.642008 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:09:48.642053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-04 01:09:48.642063 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.642099 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-04 01:09:48.642109 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-04 01:09:48.642118 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-04 01:09:48.642126 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-04 01:09:48.642142 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:09:48.642151 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-04 01:09:48.642159 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-04 01:09:48.642167 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:09:48.642202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-04 01:09:48.642212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-04 01:09:48.642220 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.642228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-04 01:09:48.642242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-04 01:09:48.642250 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.642258 | orchestrator | 2025-09-04 01:09:48.642266 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-04 01:09:48.642274 | orchestrator | Thursday 04 September 2025 01:05:23 +0000 (0:00:01.670) 0:04:44.631 **** 2025-09-04 01:09:48.642283 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-04 01:09:48.642291 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-04 01:09:48.642326 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  
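[editor's note] The loop items printed above are the kolla-ansible per-service definitions: each entry carries a container name, image, bind mounts, and a Docker healthcheck, and an item is skipped on hosts outside the service's group (nova-libvirt/nova-ssh/nova-compute only apply to the compute nodes 3-5, nova-novncproxy/nova-conductor to the controllers 0-2); the TLS-key copy itself is skipped everywhere, most likely because backend internal TLS is not enabled in this testbed. The following is a minimal, hypothetical Python sketch (not kolla-ansible code) that mirrors the dict shape shown in the log and pulls out the healthcheck command for every enabled service; all field names are taken from the log output, the helper itself is illustrative only.

```python
# Hypothetical sketch: walk a "services" mapping shaped like the loop items
# above and report each enabled container's healthcheck command.
services = {
    "nova-conductor": {
        "container_name": "nova_conductor",
        "group": "nova-conductor",
        "enabled": True,
        "image": "registry.osism.tech/kolla/nova-conductor:2024.2",
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_port nova-conductor 5672"],
            "timeout": "30",
        },
    },
    "nova-compute": {
        "container_name": "nova_compute",
        "group": "compute",
        "enabled": True,
        "image": "registry.osism.tech/kolla/nova-compute:2024.2",
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_port nova-compute 5672"],
            "timeout": "30",
        },
    },
}


def enabled_healthchecks(svc_map):
    """Yield (container_name, healthcheck command) for every enabled service."""
    for svc in svc_map.values():
        if svc.get("enabled"):
            # test[0] is the Docker check type (CMD-SHELL), the rest is the command
            yield svc["container_name"], " ".join(svc["healthcheck"]["test"][1:])


for container, check in enabled_healthchecks(services):
    print(f"{container}: {check}")
```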
2025-09-04 01:09:48.642335 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:09:48.642344 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-04 01:09:48.642357 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-04 01:09:48.642365 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-04 01:09:48.642373 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:09:48.642381 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-04 01:09:48.642415 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 
'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-04 01:09:48.642425 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-04 01:09:48.642433 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:09:48.642441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-04 01:09:48.642457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-04 01:09:48.642465 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.642473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-04 01:09:48.642482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-04 01:09:48.642490 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.642498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-04 01:09:48.642532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-04 01:09:48.642542 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.642550 | orchestrator | 2025-09-04 01:09:48.642558 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-04 01:09:48.642566 | orchestrator | Thursday 04 September 2025 01:05:25 +0000 (0:00:01.852) 0:04:46.484 **** 2025-09-04 01:09:48.642579 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.642586 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.642594 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.642602 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-04 01:09:48.642610 | orchestrator | 2025-09-04 01:09:48.642617 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-09-04 01:09:48.642625 | orchestrator | Thursday 04 September 2025 01:05:26 +0000 (0:00:00.838) 0:04:47.323 **** 2025-09-04 01:09:48.642633 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-04 01:09:48.642641 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-04 01:09:48.642648 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-04 01:09:48.642656 | orchestrator | 2025-09-04 01:09:48.642664 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-09-04 01:09:48.642672 | orchestrator | Thursday 04 September 2025 01:05:27 +0000 (0:00:00.844) 0:04:48.167 **** 2025-09-04 01:09:48.642679 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-04 01:09:48.642687 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-04 01:09:48.642695 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-04 01:09:48.642703 | orchestrator | 2025-09-04 
01:09:48.642710 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-09-04 01:09:48.642718 | orchestrator | Thursday 04 September 2025 01:05:27 +0000 (0:00:00.863) 0:04:49.030 **** 2025-09-04 01:09:48.642726 | orchestrator | ok: [testbed-node-3] 2025-09-04 01:09:48.642748 | orchestrator | ok: [testbed-node-4] 2025-09-04 01:09:48.642757 | orchestrator | ok: [testbed-node-5] 2025-09-04 01:09:48.642764 | orchestrator | 2025-09-04 01:09:48.642772 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-09-04 01:09:48.642780 | orchestrator | Thursday 04 September 2025 01:05:28 +0000 (0:00:00.428) 0:04:49.459 **** 2025-09-04 01:09:48.642788 | orchestrator | ok: [testbed-node-3] 2025-09-04 01:09:48.642795 | orchestrator | ok: [testbed-node-4] 2025-09-04 01:09:48.642803 | orchestrator | ok: [testbed-node-5] 2025-09-04 01:09:48.642810 | orchestrator | 2025-09-04 01:09:48.642818 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-09-04 01:09:48.642826 | orchestrator | Thursday 04 September 2025 01:05:28 +0000 (0:00:00.589) 0:04:50.049 **** 2025-09-04 01:09:48.642834 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-04 01:09:48.642842 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-04 01:09:48.642849 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-04 01:09:48.642857 | orchestrator | 2025-09-04 01:09:48.642865 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-09-04 01:09:48.642873 | orchestrator | Thursday 04 September 2025 01:05:30 +0000 (0:00:01.606) 0:04:51.655 **** 2025-09-04 01:09:48.642880 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-04 01:09:48.642888 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-04 01:09:48.642896 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-04 01:09:48.642903 | orchestrator | 2025-09-04 01:09:48.642911 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-09-04 01:09:48.642919 | orchestrator | Thursday 04 September 2025 01:05:31 +0000 (0:00:01.262) 0:04:52.918 **** 2025-09-04 01:09:48.642927 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-04 01:09:48.642934 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-04 01:09:48.642942 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-04 01:09:48.642950 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-09-04 01:09:48.642958 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-09-04 01:09:48.642965 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-09-04 01:09:48.642973 | orchestrator | 2025-09-04 01:09:48.642981 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-09-04 01:09:48.642994 | orchestrator | Thursday 04 September 2025 01:05:35 +0000 (0:00:04.013) 0:04:56.932 **** 2025-09-04 01:09:48.643002 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:09:48.643010 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:09:48.643018 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:09:48.643025 | orchestrator | 2025-09-04 01:09:48.643033 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 
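[editor's note] The external_ceph.yml steps above (check keyring file, extract nova/cinder key, copy keyring and ceph.conf into the nova-compute and nova-libvirt config directories) revolve around reading the cephx key out of a keyring such as ceph.client.nova.keyring so it can later be loaded into libvirt secrets. The sketch below is a rough, hedged illustration of that extraction under the assumption of the standard keyring INI layout; it is not the nova-cell role's actual implementation, and the sample keyring value is made up.

```python
# Rough illustration (assumed keyring format, fabricated sample key), not the
# kolla-ansible nova-cell role itself: pull the cephx key for a given client
# out of a Ceph keyring file's [client.<name>] section.
import re

SAMPLE_KEYRING = """\
[client.nova]
    key = AQBexampleexampleexampleexampleexampleQ==
"""


def extract_cephx_key(keyring_text: str, client: str) -> str:
    """Return the 'key = ...' value from the [client.<client>] section."""
    section = re.search(
        rf"\[client\.{re.escape(client)}\](.*?)(?=\n\[|\Z)", keyring_text, re.S
    )
    if not section:
        raise ValueError(f"no [client.{client}] section found")
    match = re.search(r"key\s*=\s*(\S+)", section.group(1))
    if not match:
        raise ValueError(f"no key entry for client.{client}")
    return match.group(1)


print(extract_cephx_key(SAMPLE_KEYRING, "nova"))
```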
2025-09-04 01:09:48.643041 | orchestrator | Thursday 04 September 2025 01:05:36 +0000 (0:00:00.547) 0:04:57.480 **** 2025-09-04 01:09:48.643049 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:09:48.643056 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:09:48.643064 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:09:48.643072 | orchestrator | 2025-09-04 01:09:48.643080 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-09-04 01:09:48.643087 | orchestrator | Thursday 04 September 2025 01:05:36 +0000 (0:00:00.354) 0:04:57.834 **** 2025-09-04 01:09:48.643095 | orchestrator | changed: [testbed-node-3] 2025-09-04 01:09:48.643103 | orchestrator | changed: [testbed-node-4] 2025-09-04 01:09:48.643111 | orchestrator | changed: [testbed-node-5] 2025-09-04 01:09:48.643118 | orchestrator | 2025-09-04 01:09:48.643150 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-09-04 01:09:48.643163 | orchestrator | Thursday 04 September 2025 01:05:37 +0000 (0:00:01.258) 0:04:59.093 **** 2025-09-04 01:09:48.643172 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-04 01:09:48.643180 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-04 01:09:48.643188 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-04 01:09:48.643196 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-04 01:09:48.643204 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-04 01:09:48.643212 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-04 01:09:48.643220 | orchestrator | 2025-09-04 01:09:48.643228 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-09-04 01:09:48.643236 | orchestrator | Thursday 04 September 2025 01:05:41 +0000 (0:00:03.358) 0:05:02.452 **** 2025-09-04 01:09:48.643243 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-04 01:09:48.643251 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-04 01:09:48.643259 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-04 01:09:48.643267 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-04 01:09:48.643275 | orchestrator | changed: [testbed-node-3] 2025-09-04 01:09:48.643282 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-04 01:09:48.643290 | orchestrator | changed: [testbed-node-4] 2025-09-04 01:09:48.643298 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-04 01:09:48.643305 | orchestrator | changed: [testbed-node-5] 2025-09-04 01:09:48.643313 | orchestrator | 2025-09-04 01:09:48.643321 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-09-04 01:09:48.643329 | orchestrator | Thursday 04 September 2025 01:05:44 +0000 (0:00:03.645) 0:05:06.098 **** 2025-09-04 01:09:48.643336 | orchestrator | skipping: [testbed-node-3] 2025-09-04 
01:09:48.643344 | orchestrator | 2025-09-04 01:09:48.643352 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-09-04 01:09:48.643360 | orchestrator | Thursday 04 September 2025 01:05:45 +0000 (0:00:00.137) 0:05:06.235 **** 2025-09-04 01:09:48.643368 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:09:48.643381 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:09:48.643389 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:09:48.643396 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.643404 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.643412 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.643420 | orchestrator | 2025-09-04 01:09:48.643427 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-09-04 01:09:48.643435 | orchestrator | Thursday 04 September 2025 01:05:45 +0000 (0:00:00.576) 0:05:06.812 **** 2025-09-04 01:09:48.643443 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-04 01:09:48.643451 | orchestrator | 2025-09-04 01:09:48.643458 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-09-04 01:09:48.643466 | orchestrator | Thursday 04 September 2025 01:05:46 +0000 (0:00:00.694) 0:05:07.507 **** 2025-09-04 01:09:48.643474 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:09:48.643482 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:09:48.643489 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:09:48.643497 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.643505 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.643512 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.643520 | orchestrator | 2025-09-04 01:09:48.643528 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-09-04 01:09:48.643535 | orchestrator | Thursday 04 September 2025 01:05:47 +0000 (0:00:00.756) 0:05:08.263 **** 2025-09-04 01:09:48.643543 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-04 01:09:48.643561 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-04 01:09:48.643570 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-04 01:09:48.643583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-04 01:09:48.643591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-04 01:09:48.643599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-04 01:09:48.643607 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-04 01:09:48.643625 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-04 01:09:48.643634 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-04 01:09:48.643642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.643655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.643664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.643672 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 
'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.643687 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.643696 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.643709 | orchestrator | 2025-09-04 01:09:48.643717 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-09-04 01:09:48.643725 | orchestrator | Thursday 04 September 2025 01:05:50 +0000 (0:00:03.761) 0:05:12.025 **** 2025-09-04 01:09:48.643768 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-04 01:09:48.643779 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-04 01:09:48.643788 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-04 01:09:48.643796 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-04 01:09:48.643815 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-04 01:09:48.643834 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-04 01:09:48.643843 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.643851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-04 01:09:48.643860 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.643872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-04 01:09:48.643884 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.643901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-04 01:09:48.643909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.643918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.643926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.643934 | orchestrator | 2025-09-04 01:09:48.643943 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-09-04 01:09:48.643951 | orchestrator | Thursday 04 September 2025 01:05:57 +0000 (0:00:06.392) 0:05:18.418 **** 2025-09-04 01:09:48.643959 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:09:48.643967 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:09:48.643975 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:09:48.643983 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.643991 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.643998 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.644005 | orchestrator | 2025-09-04 01:09:48.644012 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-09-04 01:09:48.644019 | orchestrator | Thursday 04 September 2025 01:05:59 
+0000 (0:00:02.130) 0:05:20.548 **** 2025-09-04 01:09:48.644026 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-04 01:09:48.644032 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-04 01:09:48.644039 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-04 01:09:48.644046 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-04 01:09:48.644063 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.644073 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-04 01:09:48.644080 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-04 01:09:48.644087 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-04 01:09:48.644094 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.644101 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-04 01:09:48.644108 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-04 01:09:48.644114 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.644121 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-04 01:09:48.644128 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-04 01:09:48.644135 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-04 01:09:48.644142 | orchestrator | 2025-09-04 01:09:48.644149 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-09-04 01:09:48.644156 | orchestrator | Thursday 04 September 2025 01:06:04 +0000 (0:00:04.922) 0:05:25.471 **** 2025-09-04 01:09:48.644163 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:09:48.644170 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:09:48.644176 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:09:48.644183 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.644190 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.644197 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.644204 | orchestrator | 2025-09-04 01:09:48.644211 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-09-04 01:09:48.644217 | orchestrator | Thursday 04 September 2025 01:06:04 +0000 (0:00:00.556) 0:05:26.027 **** 2025-09-04 01:09:48.644224 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-04 01:09:48.644231 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-04 01:09:48.644238 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-04 01:09:48.644245 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-04 01:09:48.644252 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 
'service': 'nova-compute'}) 2025-09-04 01:09:48.644259 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-04 01:09:48.644266 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-04 01:09:48.644272 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-04 01:09:48.644279 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-04 01:09:48.644286 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-04 01:09:48.644293 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.644300 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-04 01:09:48.644306 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.644313 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-04 01:09:48.644325 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.644331 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-04 01:09:48.644338 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-04 01:09:48.644345 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-04 01:09:48.644352 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-04 01:09:48.644359 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-04 01:09:48.644365 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-04 01:09:48.644372 | orchestrator | 2025-09-04 01:09:48.644379 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-09-04 01:09:48.644386 | orchestrator | Thursday 04 September 2025 01:06:10 +0000 (0:00:05.439) 0:05:31.467 **** 2025-09-04 01:09:48.644393 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-04 01:09:48.644400 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-04 01:09:48.644410 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-04 01:09:48.644420 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-04 01:09:48.644427 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-04 01:09:48.644434 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-04 01:09:48.644441 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-04 01:09:48.644448 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-04 
01:09:48.644455 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-04 01:09:48.644461 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-04 01:09:48.644468 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-04 01:09:48.644475 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-04 01:09:48.644482 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-04 01:09:48.644488 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.644495 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-04 01:09:48.644502 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.644509 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-04 01:09:48.644516 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-04 01:09:48.644522 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-04 01:09:48.644529 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-04 01:09:48.644536 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-04 01:09:48.644543 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.644549 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-04 01:09:48.644556 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-04 01:09:48.644563 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-04 01:09:48.644575 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-04 01:09:48.644581 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-04 01:09:48.644588 | orchestrator | 2025-09-04 01:09:48.644595 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-09-04 01:09:48.644602 | orchestrator | Thursday 04 September 2025 01:06:20 +0000 (0:00:09.939) 0:05:41.407 **** 2025-09-04 01:09:48.644608 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:09:48.644615 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:09:48.644622 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:09:48.644629 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.644636 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.644642 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.644649 | orchestrator | 2025-09-04 01:09:48.644656 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-09-04 01:09:48.644663 | orchestrator | Thursday 04 September 2025 01:06:21 +0000 (0:00:00.793) 0:05:42.200 **** 2025-09-04 01:09:48.644670 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:09:48.644677 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:09:48.644683 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:09:48.644690 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.644697 | orchestrator | skipping: [testbed-node-1] 2025-09-04 
01:09:48.644703 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.644710 | orchestrator | 2025-09-04 01:09:48.644717 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-09-04 01:09:48.644724 | orchestrator | Thursday 04 September 2025 01:06:21 +0000 (0:00:00.595) 0:05:42.796 **** 2025-09-04 01:09:48.644731 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.644752 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.644758 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.644765 | orchestrator | changed: [testbed-node-3] 2025-09-04 01:09:48.644772 | orchestrator | changed: [testbed-node-4] 2025-09-04 01:09:48.644778 | orchestrator | changed: [testbed-node-5] 2025-09-04 01:09:48.644785 | orchestrator | 2025-09-04 01:09:48.644792 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-09-04 01:09:48.644798 | orchestrator | Thursday 04 September 2025 01:06:24 +0000 (0:00:02.716) 0:05:45.513 **** 2025-09-04 01:09:48.644814 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-04 01:09:48.644822 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-04 01:09:48.644829 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-04 01:09:48.644841 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:09:48.644848 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-04 01:09:48.644855 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-04 01:09:48.644862 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-04 01:09:48.644869 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:09:48.644883 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-04 01:09:48.644891 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-04 01:09:48.644903 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-04 01:09:48.644909 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:09:48.644917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-04 01:09:48.644924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-04 01:09:48.644931 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.644938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-04 01:09:48.644953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-04 01:09:48.644960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-04 01:09:48.644971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-04 01:09:48.644978 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.644985 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.644992 | orchestrator | 2025-09-04 01:09:48.644999 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-09-04 01:09:48.645006 | orchestrator | Thursday 04 September 2025 01:06:25 +0000 (0:00:01.133) 0:05:46.647 **** 2025-09-04 01:09:48.645013 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-09-04 01:09:48.645020 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-09-04 01:09:48.645026 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:09:48.645033 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-09-04 01:09:48.645040 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-09-04 01:09:48.645047 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:09:48.645054 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-09-04 01:09:48.645060 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-09-04 01:09:48.645067 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:09:48.645074 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-09-04 01:09:48.645081 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-09-04 01:09:48.645087 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.645094 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-09-04 01:09:48.645101 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-09-04 01:09:48.645108 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.645114 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-09-04 01:09:48.645121 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-09-04 01:09:48.645128 | orchestrator | skipping: [testbed-node-2] 2025-09-04 
01:09:48.645134 | orchestrator | 2025-09-04 01:09:48.645141 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-09-04 01:09:48.645148 | orchestrator | Thursday 04 September 2025 01:06:26 +0000 (0:00:00.713) 0:05:47.361 **** 2025-09-04 01:09:48.645155 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-04 01:09:48.645173 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-04 01:09:48.645180 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-04 01:09:48.645188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-04 01:09:48.645195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-04 01:09:48.645202 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-04 01:09:48.645209 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-04 01:09:48.645266 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-04 01:09:48.645274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-04 01:09:48.645281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.645288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.645295 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.645303 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.645322 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.645330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-04 01:09:48.645337 | orchestrator | 2025-09-04 01:09:48.645344 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-04 01:09:48.645351 | orchestrator | Thursday 04 September 2025 01:06:29 +0000 (0:00:02.948) 0:05:50.309 **** 2025-09-04 01:09:48.645358 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:09:48.645365 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:09:48.645372 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:09:48.645378 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.645385 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.645392 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.645398 | orchestrator | 2025-09-04 01:09:48.645405 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-04 01:09:48.645412 | orchestrator | Thursday 04 September 2025 01:06:29 +0000 (0:00:00.761) 0:05:51.070 **** 2025-09-04 01:09:48.645418 | orchestrator | 2025-09-04 01:09:48.645425 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-04 01:09:48.645432 | orchestrator | Thursday 04 September 2025 01:06:30 +0000 (0:00:00.137) 0:05:51.208 **** 2025-09-04 01:09:48.645438 | orchestrator | 2025-09-04 01:09:48.645445 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-04 01:09:48.645452 | orchestrator | Thursday 04 September 2025 01:06:30 +0000 (0:00:00.134) 0:05:51.342 **** 2025-09-04 01:09:48.645458 | orchestrator | 2025-09-04 01:09:48.645465 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-04 01:09:48.645472 | orchestrator | Thursday 04 September 2025 01:06:30 +0000 (0:00:00.139) 0:05:51.482 **** 2025-09-04 01:09:48.645478 | orchestrator | 2025-09-04 01:09:48.645485 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-04 01:09:48.645492 | orchestrator | Thursday 04 September 2025 01:06:30 +0000 (0:00:00.158) 0:05:51.640 **** 2025-09-04 01:09:48.645499 | orchestrator | 2025-09-04 01:09:48.645505 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-04 01:09:48.645512 | orchestrator | Thursday 04 September 2025 01:06:30 +0000 (0:00:00.152) 0:05:51.793 **** 2025-09-04 01:09:48.645519 | orchestrator | 2025-09-04 01:09:48.645525 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-09-04 01:09:48.645532 | orchestrator | Thursday 04 September 2025 01:06:30 +0000 (0:00:00.316) 0:05:52.109 **** 2025-09-04 01:09:48.645543 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:09:48.645550 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:09:48.645556 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:09:48.645563 | orchestrator | 2025-09-04 01:09:48.645570 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-09-04 01:09:48.645576 | orchestrator | Thursday 04 
September 2025 01:06:43 +0000 (0:00:12.539) 0:06:04.649 **** 2025-09-04 01:09:48.645583 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:09:48.645590 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:09:48.645597 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:09:48.645603 | orchestrator | 2025-09-04 01:09:48.645610 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-09-04 01:09:48.645617 | orchestrator | Thursday 04 September 2025 01:07:03 +0000 (0:00:19.572) 0:06:24.222 **** 2025-09-04 01:09:48.645623 | orchestrator | changed: [testbed-node-3] 2025-09-04 01:09:48.645630 | orchestrator | changed: [testbed-node-5] 2025-09-04 01:09:48.645637 | orchestrator | changed: [testbed-node-4] 2025-09-04 01:09:48.645643 | orchestrator | 2025-09-04 01:09:48.645650 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-09-04 01:09:48.645657 | orchestrator | Thursday 04 September 2025 01:07:29 +0000 (0:00:26.178) 0:06:50.400 **** 2025-09-04 01:09:48.645664 | orchestrator | changed: [testbed-node-4] 2025-09-04 01:09:48.645670 | orchestrator | changed: [testbed-node-3] 2025-09-04 01:09:48.645677 | orchestrator | changed: [testbed-node-5] 2025-09-04 01:09:48.645684 | orchestrator | 2025-09-04 01:09:48.645690 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-09-04 01:09:48.645697 | orchestrator | Thursday 04 September 2025 01:08:06 +0000 (0:00:36.787) 0:07:27.188 **** 2025-09-04 01:09:48.645704 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left). 2025-09-04 01:09:48.645711 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2025-09-04 01:09:48.645718 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 
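The three FAILED - RETRYING lines above come from a poll-until-ready loop: the handler keeps probing the freshly restarted nova_libvirt containers until the libvirt daemon answers. A minimal sketch of that pattern, assuming the docker CLI on the compute hosts and reusing the probe from the nova_libvirt healthcheck listed earlier ('virsh version --daemon'); it illustrates the retry idea only and is not the actual kolla-ansible task:

#!/usr/bin/env python3
# Minimal sketch of the poll-until-ready pattern behind the
# "Checking libvirt container is ready" retries above. Assumptions: the
# docker CLI is available on the compute host, the container is named
# nova_libvirt, and the probe is the same one used in the container
# healthcheck shown earlier ("virsh version --daemon"). Illustration only,
# not the kolla-ansible implementation.
import subprocess
import sys
import time

RETRIES = 10        # matches "(10 retries left)" in the log
DELAY_SECONDS = 5   # assumed pause between attempts


def libvirt_ready() -> bool:
    """Return True once virsh can reach the libvirt daemon in the container."""
    result = subprocess.run(
        ["docker", "exec", "nova_libvirt", "virsh", "version", "--daemon"],
        capture_output=True,
    )
    return result.returncode == 0


for attempt in range(1, RETRIES + 1):
    if libvirt_ready():
        print(f"libvirt ready after {attempt} attempt(s)")
        break
    print(f"FAILED - RETRYING ({RETRIES - attempt} retries left)")
    time.sleep(DELAY_SECONDS)
else:
    sys.exit("nova_libvirt did not become ready in time")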
2025-09-04 01:09:48.645725 | orchestrator | changed: [testbed-node-3] 2025-09-04 01:09:48.645731 | orchestrator | changed: [testbed-node-4] 2025-09-04 01:09:48.645749 | orchestrator | changed: [testbed-node-5] 2025-09-04 01:09:48.645756 | orchestrator | 2025-09-04 01:09:48.645767 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-09-04 01:09:48.645777 | orchestrator | Thursday 04 September 2025 01:08:12 +0000 (0:00:06.417) 0:07:33.605 **** 2025-09-04 01:09:48.645784 | orchestrator | changed: [testbed-node-3] 2025-09-04 01:09:48.645791 | orchestrator | changed: [testbed-node-4] 2025-09-04 01:09:48.645798 | orchestrator | changed: [testbed-node-5] 2025-09-04 01:09:48.645805 | orchestrator | 2025-09-04 01:09:48.645812 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-09-04 01:09:48.645818 | orchestrator | Thursday 04 September 2025 01:08:13 +0000 (0:00:00.919) 0:07:34.525 **** 2025-09-04 01:09:48.645825 | orchestrator | changed: [testbed-node-3] 2025-09-04 01:09:48.645832 | orchestrator | changed: [testbed-node-4] 2025-09-04 01:09:48.645838 | orchestrator | changed: [testbed-node-5] 2025-09-04 01:09:48.645845 | orchestrator | 2025-09-04 01:09:48.645852 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-09-04 01:09:48.645859 | orchestrator | Thursday 04 September 2025 01:08:40 +0000 (0:00:27.524) 0:08:02.050 **** 2025-09-04 01:09:48.645866 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:09:48.645872 | orchestrator | 2025-09-04 01:09:48.645879 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-09-04 01:09:48.645886 | orchestrator | Thursday 04 September 2025 01:08:41 +0000 (0:00:00.148) 0:08:02.198 **** 2025-09-04 01:09:48.645893 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:09:48.645899 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:09:48.645906 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.645917 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.645924 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.645931 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
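The retry above ("Waiting for nova-compute services to register themselves", 20 retries, delegated to testbed-node-0) waits until every compute host has reported a nova-compute service record to the Nova API. A sketch of that wait, assuming a configured openstack CLI on the delegate host; the expected host list is taken from this testbed's compute nodes, and the rest is an assumption rather than the kolla-ansible implementation:

#!/usr/bin/env python3
# Sketch of the wait behind "Waiting for nova-compute services to register
# themselves" (20 retries in the log). Assumes a configured openstack CLI;
# the expected host names are this testbed's compute nodes. Not the actual
# kolla-ansible task.
import json
import subprocess
import sys
import time

EXPECTED = {"testbed-node-3", "testbed-node-4", "testbed-node-5"}
RETRIES = 20        # matches "(20 retries left)" in the log
DELAY_SECONDS = 10  # assumed pause between attempts


def registered_hosts() -> set:
    """Hosts that currently have a nova-compute service record."""
    out = subprocess.check_output(
        ["openstack", "compute", "service", "list",
         "--service", "nova-compute", "-f", "json"]
    )
    return {svc["Host"] for svc in json.loads(out)}


for _ in range(RETRIES):
    missing = EXPECTED - registered_hosts()
    if not missing:
        print("all expected nova-compute services are registered")
        break
    print(f"still waiting for: {sorted(missing)}")
    time.sleep(DELAY_SECONDS)
else:
    sys.exit("nova-compute services did not register in time")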
2025-09-04 01:09:48.645938 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-09-04 01:09:48.645944 | orchestrator | 2025-09-04 01:09:48.645951 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-09-04 01:09:48.645958 | orchestrator | Thursday 04 September 2025 01:09:02 +0000 (0:00:21.923) 0:08:24.122 **** 2025-09-04 01:09:48.645965 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:09:48.645972 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.645978 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.645985 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.645992 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:09:48.645998 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:09:48.646005 | orchestrator | 2025-09-04 01:09:48.646012 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-09-04 01:09:48.646041 | orchestrator | Thursday 04 September 2025 01:09:11 +0000 (0:00:08.689) 0:08:32.811 **** 2025-09-04 01:09:48.646047 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:09:48.646054 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:09:48.646060 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.646067 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.646074 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.646080 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2025-09-04 01:09:48.646087 | orchestrator | 2025-09-04 01:09:48.646094 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-04 01:09:48.646100 | orchestrator | Thursday 04 September 2025 01:09:14 +0000 (0:00:03.330) 0:08:36.141 **** 2025-09-04 01:09:48.646107 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-09-04 01:09:48.646114 | orchestrator | 2025-09-04 01:09:48.646120 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-04 01:09:48.646127 | orchestrator | Thursday 04 September 2025 01:09:26 +0000 (0:00:12.018) 0:08:48.160 **** 2025-09-04 01:09:48.646133 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-09-04 01:09:48.646140 | orchestrator | 2025-09-04 01:09:48.646147 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-09-04 01:09:48.646153 | orchestrator | Thursday 04 September 2025 01:09:28 +0000 (0:00:01.217) 0:08:49.378 **** 2025-09-04 01:09:48.646160 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:09:48.646166 | orchestrator | 2025-09-04 01:09:48.646173 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-09-04 01:09:48.646179 | orchestrator | Thursday 04 September 2025 01:09:29 +0000 (0:00:01.235) 0:08:50.614 **** 2025-09-04 01:09:48.646186 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-09-04 01:09:48.646192 | orchestrator | 2025-09-04 01:09:48.646199 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-09-04 01:09:48.646206 | orchestrator | Thursday 04 September 2025 01:09:40 +0000 (0:00:10.642) 0:09:01.257 **** 2025-09-04 01:09:48.646212 | orchestrator | ok: [testbed-node-3] 2025-09-04 01:09:48.646219 | orchestrator | ok: [testbed-node-4] 2025-09-04 01:09:48.646225 | orchestrator | ok: 
[testbed-node-5] 2025-09-04 01:09:48.646232 | orchestrator | ok: [testbed-node-0] 2025-09-04 01:09:48.646238 | orchestrator | ok: [testbed-node-1] 2025-09-04 01:09:48.646245 | orchestrator | ok: [testbed-node-2] 2025-09-04 01:09:48.646251 | orchestrator | 2025-09-04 01:09:48.646258 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-09-04 01:09:48.646264 | orchestrator | 2025-09-04 01:09:48.646271 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-09-04 01:09:48.646278 | orchestrator | Thursday 04 September 2025 01:09:41 +0000 (0:00:01.734) 0:09:02.991 **** 2025-09-04 01:09:48.646289 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:09:48.646295 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:09:48.646302 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:09:48.646308 | orchestrator | 2025-09-04 01:09:48.646315 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-09-04 01:09:48.646322 | orchestrator | 2025-09-04 01:09:48.646328 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-09-04 01:09:48.646335 | orchestrator | Thursday 04 September 2025 01:09:42 +0000 (0:00:01.158) 0:09:04.150 **** 2025-09-04 01:09:48.646342 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.646348 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.646355 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.646361 | orchestrator | 2025-09-04 01:09:48.646372 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-09-04 01:09:48.646379 | orchestrator | 2025-09-04 01:09:48.646389 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-09-04 01:09:48.646395 | orchestrator | Thursday 04 September 2025 01:09:43 +0000 (0:00:00.532) 0:09:04.682 **** 2025-09-04 01:09:48.646402 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-09-04 01:09:48.646409 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-09-04 01:09:48.646415 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-09-04 01:09:48.646422 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-09-04 01:09:48.646428 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-09-04 01:09:48.646435 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-09-04 01:09:48.646442 | orchestrator | skipping: [testbed-node-3] 2025-09-04 01:09:48.646448 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-09-04 01:09:48.646455 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-09-04 01:09:48.646461 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-09-04 01:09:48.646468 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-09-04 01:09:48.646474 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-09-04 01:09:48.646481 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-09-04 01:09:48.646487 | orchestrator | skipping: [testbed-node-4] 2025-09-04 01:09:48.646494 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-09-04 01:09:48.646500 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-09-04 01:09:48.646507 | 
orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-09-04 01:09:48.646514 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-09-04 01:09:48.646520 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-09-04 01:09:48.646527 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-09-04 01:09:48.646533 | orchestrator | skipping: [testbed-node-5] 2025-09-04 01:09:48.646540 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-09-04 01:09:48.646546 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-09-04 01:09:48.646553 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-09-04 01:09:48.646559 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-09-04 01:09:48.646566 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-09-04 01:09:48.646573 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-09-04 01:09:48.646579 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.646586 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-09-04 01:09:48.646592 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-09-04 01:09:48.646599 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-09-04 01:09:48.646606 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-09-04 01:09:48.646617 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-09-04 01:09:48.646623 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-09-04 01:09:48.646630 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.646636 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-09-04 01:09:48.646643 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-09-04 01:09:48.646649 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-09-04 01:09:48.646656 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-09-04 01:09:48.646662 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-09-04 01:09:48.646669 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-09-04 01:09:48.646676 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:09:48.646682 | orchestrator | 2025-09-04 01:09:48.646689 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-09-04 01:09:48.646695 | orchestrator | 2025-09-04 01:09:48.646702 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-09-04 01:09:48.646709 | orchestrator | Thursday 04 September 2025 01:09:44 +0000 (0:00:01.329) 0:09:06.011 **** 2025-09-04 01:09:48.646715 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-09-04 01:09:48.646722 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-09-04 01:09:48.646728 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:09:48.646765 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-09-04 01:09:48.646773 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-09-04 01:09:48.646779 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:09:48.646786 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-09-04 01:09:48.646792 | orchestrator | 
skipping: [testbed-node-2] => (item=nova-api) 
2025-09-04 01:09:48.646799 | orchestrator | skipping: [testbed-node-2]
2025-09-04 01:09:48.646805 | orchestrator | 
2025-09-04 01:09:48.646812 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2025-09-04 01:09:48.646818 | orchestrator | 
2025-09-04 01:09:48.646825 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2025-09-04 01:09:48.646832 | orchestrator | Thursday 04 September 2025 01:09:45 +0000 (0:00:00.736) 0:09:06.748 ****
2025-09-04 01:09:48.646838 | orchestrator | skipping: [testbed-node-0]
2025-09-04 01:09:48.646845 | orchestrator | 
2025-09-04 01:09:48.646851 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2025-09-04 01:09:48.646858 | orchestrator | 
2025-09-04 01:09:48.646864 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2025-09-04 01:09:48.646871 | orchestrator | Thursday 04 September 2025 01:09:46 +0000 (0:00:00.657) 0:09:07.405 ****
2025-09-04 01:09:48.646877 | orchestrator | skipping: [testbed-node-0]
2025-09-04 01:09:48.646888 | orchestrator | skipping: [testbed-node-1]
2025-09-04 01:09:48.646895 | orchestrator | skipping: [testbed-node-2]
2025-09-04 01:09:48.646901 | orchestrator | 
2025-09-04 01:09:48.646911 | orchestrator | PLAY RECAP *********************************************************************
2025-09-04 01:09:48.646918 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-04 01:09:48.646925 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2025-09-04 01:09:48.646932 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-09-04 01:09:48.646939 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-09-04 01:09:48.646945 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-09-04 01:09:48.646956 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2025-09-04 01:09:48.646963 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-09-04 01:09:48.646969 | orchestrator | 
2025-09-04 01:09:48.646976 | orchestrator | 
2025-09-04 01:09:48.646983 | orchestrator | TASKS RECAP ********************************************************************
2025-09-04 01:09:48.646989 | orchestrator | Thursday 04 September 2025 01:09:46 +0000 (0:00:00.425) 0:09:07.831 ****
2025-09-04 01:09:48.646996 | orchestrator | ===============================================================================
2025-09-04 01:09:48.647002 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 36.79s
2025-09-04 01:09:48.647009 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 29.26s
2025-09-04 01:09:48.647015 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 27.52s
2025-09-04 01:09:48.647022 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 26.18s
2025-09-04 01:09:48.647028 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.92s
2025-09-04 01:09:48.647035 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.42s
2025-09-04 01:09:48.647041 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 21.25s
2025-09-04 01:09:48.647048 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 19.57s
2025-09-04 01:09:48.647054 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.35s
2025-09-04 01:09:48.647061 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.52s
2025-09-04 01:09:48.647067 | orchestrator | nova : Restart nova-api container -------------------------------------- 12.66s
2025-09-04 01:09:48.647074 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.59s
2025-09-04 01:09:48.647080 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.54s
2025-09-04 01:09:48.647087 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.02s
2025-09-04 01:09:48.647093 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.73s
2025-09-04 01:09:48.647100 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.18s
2025-09-04 01:09:48.647106 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 10.64s
2025-09-04 01:09:48.647113 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 9.94s
2025-09-04 01:09:48.647119 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 9.90s
2025-09-04 01:09:48.647126 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.07s
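The PLAY RECAP above is the per-host result table for this kolla-ansible run (every host finished with failed=0 and unreachable=0), and the TASKS RECAP lists per-task durations, longest first. A small, hypothetical helper for post-processing recap lines when analysing logs like this one; the regular expression simply mirrors the format printed above:

#!/usr/bin/env python3
# Hypothetical helper for post-processing a PLAY RECAP like the one above:
# parse the per-host counters and flag any host with failed or unreachable
# tasks. The regular expression mirrors the recap lines printed in this log.
import re

RECAP_RE = re.compile(
    r"^(?P<host>[\w.-]+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)\s+skipped=(?P<skipped>\d+)"
)


def parse_recap(lines):
    """Map host name -> dict of integer counters for every recap line found."""
    stats = {}
    for line in lines:
        match = RECAP_RE.match(line.strip())
        if match:
            host = match.group("host")
            stats[host] = {key: int(value)
                           for key, value in match.groupdict().items()
                           if key != "host"}
    return stats


sample = [
    "testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0",
    "testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0",
]
for host, counters in parse_recap(sample).items():
    healthy = counters["failed"] == 0 and counters["unreachable"] == 0
    print(host, "OK" if healthy else "FAILED", counters)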
2025-09-04 01:09:48.647132 | orchestrator | 2025-09-04 01:09:48 | INFO  | Wait 1 second(s) until the next check
2025-09-04 01:09:51.680222 | orchestrator | 2025-09-04 01:09:51 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED
2025-09-04 01:09:51.680325 | orchestrator | 2025-09-04 01:09:51 | INFO  | Wait 1 second(s) until the next check
2025-09-04 01:09:54.723625 | orchestrator | 2025-09-04 01:09:54 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED
2025-09-04 01:09:54.723725 | orchestrator | 2025-09-04 01:09:54 | INFO  | Wait 1 second(s) until the next check
2025-09-04 01:09:57.772070 | orchestrator | 2025-09-04 01:09:57 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED
2025-09-04 01:09:57.772192 | orchestrator | 2025-09-04 01:09:57 | INFO  | Wait 1 second(s) until the next check
2025-09-04 01:10:00.821307 | orchestrator | 2025-09-04 01:10:00 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED
2025-09-04 01:10:00.821436 | orchestrator | 2025-09-04 01:10:00 | INFO  | Wait 1 second(s) until the next check
2025-09-04 01:10:03.865289 | orchestrator | 2025-09-04 01:10:03 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED
2025-09-04 01:10:03.865381 | orchestrator | 2025-09-04 01:10:03 | INFO  | Wait 1 second(s) until the next check
2025-09-04 01:10:06.901120 | orchestrator | 2025-09-04 01:10:06 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED
2025-09-04 01:10:06.901207 | orchestrator | 2025-09-04 01:10:06 | INFO  | Wait 1 second(s) until the next check
2025-09-04 01:10:09.944933 | orchestrator | 2025-09-04 01:10:09 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED
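The repeated "Task ... is in state STARTED" / "Wait 1 second(s) until the next check" messages are the deployment wrapper polling the orchestrator's task until it reaches a final state. A sketch of that polling pattern; the fake backend below is a stand-in, not the real OSISM/Celery task API:

#!/usr/bin/env python3
# Sketch of the status-polling loop that produces the
# "Task ... is in state STARTED" / "Wait 1 second(s) until the next check"
# messages above. The fake backend stands in for however the real
# orchestrator queries its task store (for example a Celery result backend);
# it is not the OSISM implementation.
import itertools
import logging
import time

logging.basicConfig(format="%(asctime)s | %(levelname)s | %(message)s",
                    level=logging.INFO)
log = logging.getLogger(__name__)


def wait_for_task(task_id, check_state, interval=1):
    """Poll check_state(task_id) until the task reaches a final state."""
    while True:
        state = check_state(task_id)
        log.info("Task %s is in state %s", task_id, state)
        if state in ("SUCCESS", "FAILURE"):
            return state
        log.info("Wait %s second(s) until the next check", interval)
        time.sleep(interval)


# Demo: a fake backend that reports STARTED twice and then SUCCESS.
states = itertools.chain(["STARTED", "STARTED"], itertools.repeat("SUCCESS"))
wait_for_task("c715f4bc-736e-457e-8e73-77a33dc019d7",
              lambda _task_id: next(states))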
2025-09-04 01:10:09.945017 | orchestrator | 2025-09-04 01:10:09 | INFO  | Wait 1 second(s) until the next check
2025-09-04 01:10:31.269177 | orchestrator | 2025-09-04 01:10:31 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED
2025-09-04 01:10:31.269277 | orchestrator | 2025-09-04 01:10:31 | INFO  | Wait 1 second(s) until the next check
2025-09-04 01:11:32.225659 | orchestrator | 2025-09-04 01:11:32 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED
2025-09-04 01:11:32.225848 | orchestrator | 2025-09-04 01:11:32 | INFO  | Wait 1 second(s) until the next check
2025-09-04 01:12:33.145952 | orchestrator | 2025-09-04 01:12:33 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED
2025-09-04 01:12:33.146127 | orchestrator | 2025-09-04 01:12:33 | INFO  | Wait 1 second(s) until the next check
2025-09-04 01:13:03.586434 | orchestrator | 2025-09-04 01:13:03 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state STARTED
2025-09-04 01:13:03.586563 | orchestrator | 2025-09-04 01:13:03 | INFO  | Wait 1
second(s) until the next check 2025-09-04 01:13:06.637157 | orchestrator | 2025-09-04 01:13:06.637530 | orchestrator | 2025-09-04 01:13:06 | INFO  | Task c715f4bc-736e-457e-8e73-77a33dc019d7 is in state SUCCESS 2025-09-04 01:13:06.639228 | orchestrator | 2025-09-04 01:13:06.639311 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-04 01:13:06.639327 | orchestrator | 2025-09-04 01:13:06.639339 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-04 01:13:06.639350 | orchestrator | Thursday 04 September 2025 01:08:21 +0000 (0:00:00.302) 0:00:00.302 **** 2025-09-04 01:13:06.639362 | orchestrator | ok: [testbed-node-0] 2025-09-04 01:13:06.639374 | orchestrator | ok: [testbed-node-1] 2025-09-04 01:13:06.639385 | orchestrator | ok: [testbed-node-2] 2025-09-04 01:13:06.639395 | orchestrator | 2025-09-04 01:13:06.639407 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-04 01:13:06.639417 | orchestrator | Thursday 04 September 2025 01:08:21 +0000 (0:00:00.301) 0:00:00.604 **** 2025-09-04 01:13:06.639428 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-09-04 01:13:06.639440 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-09-04 01:13:06.639451 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-09-04 01:13:06.639462 | orchestrator | 2025-09-04 01:13:06.639473 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-09-04 01:13:06.639483 | orchestrator | 2025-09-04 01:13:06.639494 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-04 01:13:06.639505 | orchestrator | Thursday 04 September 2025 01:08:22 +0000 (0:00:00.469) 0:00:01.073 **** 2025-09-04 01:13:06.639516 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 01:13:06.639527 | orchestrator | 2025-09-04 01:13:06.639538 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-09-04 01:13:06.639548 | orchestrator | Thursday 04 September 2025 01:08:22 +0000 (0:00:00.598) 0:00:01.672 **** 2025-09-04 01:13:06.639559 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-09-04 01:13:06.639570 | orchestrator | 2025-09-04 01:13:06.639581 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-09-04 01:13:06.639591 | orchestrator | Thursday 04 September 2025 01:08:26 +0000 (0:00:03.503) 0:00:05.175 **** 2025-09-04 01:13:06.639602 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-09-04 01:13:06.639614 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-09-04 01:13:06.639624 | orchestrator | 2025-09-04 01:13:06.639635 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-09-04 01:13:06.639672 | orchestrator | Thursday 04 September 2025 01:08:32 +0000 (0:00:06.813) 0:00:11.989 **** 2025-09-04 01:13:06.639683 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-04 01:13:06.639694 | orchestrator | 2025-09-04 01:13:06.639705 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-09-04 01:13:06.639715 | 
orchestrator | Thursday 04 September 2025 01:08:36 +0000 (0:00:03.653) 0:00:15.643 **** 2025-09-04 01:13:06.639726 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-04 01:13:06.639737 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-09-04 01:13:06.639748 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-09-04 01:13:06.639759 | orchestrator | 2025-09-04 01:13:06.639770 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-09-04 01:13:06.639781 | orchestrator | Thursday 04 September 2025 01:08:45 +0000 (0:00:08.824) 0:00:24.467 **** 2025-09-04 01:13:06.639792 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-04 01:13:06.639826 | orchestrator | 2025-09-04 01:13:06.639838 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-09-04 01:13:06.639850 | orchestrator | Thursday 04 September 2025 01:08:49 +0000 (0:00:03.696) 0:00:28.164 **** 2025-09-04 01:13:06.639863 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-09-04 01:13:06.639875 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-09-04 01:13:06.639887 | orchestrator | 2025-09-04 01:13:06.639900 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-09-04 01:13:06.639912 | orchestrator | Thursday 04 September 2025 01:08:57 +0000 (0:00:07.841) 0:00:36.006 **** 2025-09-04 01:13:06.639925 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-09-04 01:13:06.639937 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-09-04 01:13:06.639949 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-09-04 01:13:06.639962 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-09-04 01:13:06.639975 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-09-04 01:13:06.639987 | orchestrator | 2025-09-04 01:13:06.640000 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-04 01:13:06.640026 | orchestrator | Thursday 04 September 2025 01:09:12 +0000 (0:00:15.919) 0:00:51.925 **** 2025-09-04 01:13:06.640039 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 01:13:06.640052 | orchestrator | 2025-09-04 01:13:06.640064 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-09-04 01:13:06.640077 | orchestrator | Thursday 04 September 2025 01:09:13 +0000 (0:00:00.889) 0:00:52.815 **** 2025-09-04 01:13:06.640090 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:13:06.640102 | orchestrator | 2025-09-04 01:13:06.640115 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2025-09-04 01:13:06.640127 | orchestrator | Thursday 04 September 2025 01:09:17 +0000 (0:00:03.667) 0:00:56.482 **** 2025-09-04 01:13:06.640139 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:13:06.640151 | orchestrator | 2025-09-04 01:13:06.640164 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-09-04 01:13:06.640291 | orchestrator | Thursday 04 September 2025 01:09:22 +0000 (0:00:04.566) 0:01:01.049 **** 2025-09-04 01:13:06.640308 | orchestrator | ok: [testbed-node-0] 
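The service-ks-register and octavia prepare tasks above map closely onto plain OpenStack CLI calls. The following is a minimal sketch of those steps, assuming admin credentials are already sourced: the endpoint URLs and the load-balancer_* role names are taken from the log above, while the region, service description, flavor sizing, password handling and keypair name/path are illustrative assumptions and not necessarily the values kolla-ansible uses.

  # Hedged sketch of the Keystone registration and amphora prerequisites created
  # above; values marked ILLUSTRATIVE are assumptions, not read from this log.
  openstack service create --name octavia --description "Octavia Load Balancing" load-balancer
  openstack endpoint create --region RegionOne octavia internal https://api-int.testbed.osism.xyz:9876  # region ILLUSTRATIVE
  openstack endpoint create --region RegionOne octavia public https://api.testbed.osism.xyz:9876
  openstack user create --project service --password "$OCTAVIA_KEYSTONE_PASSWORD" octavia  # variable ILLUSTRATIVE
  openstack role add --project service --user octavia admin
  for role in load-balancer_observer load-balancer_global_observer \
              load-balancer_member load-balancer_admin load-balancer_quota_admin; do
    openstack role create "$role"
  done
  openstack flavor create --vcpus 1 --ram 1024 --disk 10 --private amphora  # sizing ILLUSTRATIVE
  openstack keypair create --public-key /etc/octavia/octavia_ssh_key.pub octavia_ssh_key  # name/path ILLUSTRATIVE
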
2025-09-04 01:13:06.640320 | orchestrator | 2025-09-04 01:13:06.640330 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2025-09-04 01:13:06.640341 | orchestrator | Thursday 04 September 2025 01:09:25 +0000 (0:00:03.251) 0:01:04.301 **** 2025-09-04 01:13:06.640352 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-09-04 01:13:06.640363 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-09-04 01:13:06.640373 | orchestrator | 2025-09-04 01:13:06.640384 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2025-09-04 01:13:06.640395 | orchestrator | Thursday 04 September 2025 01:09:35 +0000 (0:00:10.087) 0:01:14.388 **** 2025-09-04 01:13:06.640405 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2025-09-04 01:13:06.640417 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2025-09-04 01:13:06.640429 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2025-09-04 01:13:06.640441 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2025-09-04 01:13:06.640452 | orchestrator | 2025-09-04 01:13:06.640463 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2025-09-04 01:13:06.640473 | orchestrator | Thursday 04 September 2025 01:09:52 +0000 (0:00:17.008) 0:01:31.397 **** 2025-09-04 01:13:06.640495 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:13:06.640506 | orchestrator | 2025-09-04 01:13:06.640517 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2025-09-04 01:13:06.640528 | orchestrator | Thursday 04 September 2025 01:09:56 +0000 (0:00:04.464) 0:01:35.862 **** 2025-09-04 01:13:06.640538 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:13:06.640549 | orchestrator | 2025-09-04 01:13:06.640559 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2025-09-04 01:13:06.640570 | orchestrator | Thursday 04 September 2025 01:10:03 +0000 (0:00:06.191) 0:01:42.054 **** 2025-09-04 01:13:06.640580 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:13:06.640591 | orchestrator | 2025-09-04 01:13:06.640626 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2025-09-04 01:13:06.640637 | orchestrator | Thursday 04 September 2025 01:10:03 +0000 (0:00:00.224) 0:01:42.278 **** 2025-09-04 01:13:06.640672 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:13:06.640683 | orchestrator | 2025-09-04 01:13:06.640694 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-04 01:13:06.640704 | orchestrator | Thursday 04 September 2025 01:10:07 +0000 (0:00:04.656) 0:01:46.934 **** 2025-09-04 01:13:06.640715 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 01:13:06.640726 | orchestrator | 2025-09-04 01:13:06.640737 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2025-09-04 01:13:06.640747 | orchestrator 
| Thursday 04 September 2025 01:10:09 +0000 (0:00:01.139) 0:01:48.074 **** 2025-09-04 01:13:06.640758 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:13:06.640769 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:13:06.640779 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:13:06.640790 | orchestrator | 2025-09-04 01:13:06.640801 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2025-09-04 01:13:06.640811 | orchestrator | Thursday 04 September 2025 01:10:14 +0000 (0:00:05.653) 0:01:53.728 **** 2025-09-04 01:13:06.640822 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:13:06.640833 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:13:06.640843 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:13:06.640854 | orchestrator | 2025-09-04 01:13:06.640864 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2025-09-04 01:13:06.640875 | orchestrator | Thursday 04 September 2025 01:10:19 +0000 (0:00:04.797) 0:01:58.525 **** 2025-09-04 01:13:06.640885 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:13:06.640896 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:13:06.640907 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:13:06.640917 | orchestrator | 2025-09-04 01:13:06.640928 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2025-09-04 01:13:06.640939 | orchestrator | Thursday 04 September 2025 01:10:20 +0000 (0:00:00.791) 0:01:59.317 **** 2025-09-04 01:13:06.640949 | orchestrator | ok: [testbed-node-1] 2025-09-04 01:13:06.640960 | orchestrator | ok: [testbed-node-0] 2025-09-04 01:13:06.640971 | orchestrator | ok: [testbed-node-2] 2025-09-04 01:13:06.640981 | orchestrator | 2025-09-04 01:13:06.640992 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2025-09-04 01:13:06.641002 | orchestrator | Thursday 04 September 2025 01:10:22 +0000 (0:00:01.926) 0:02:01.244 **** 2025-09-04 01:13:06.641020 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:13:06.641031 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:13:06.641042 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:13:06.641052 | orchestrator | 2025-09-04 01:13:06.641063 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2025-09-04 01:13:06.641073 | orchestrator | Thursday 04 September 2025 01:10:23 +0000 (0:00:01.289) 0:02:02.534 **** 2025-09-04 01:13:06.641084 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:13:06.641094 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:13:06.641117 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:13:06.641129 | orchestrator | 2025-09-04 01:13:06.641140 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2025-09-04 01:13:06.641150 | orchestrator | Thursday 04 September 2025 01:10:24 +0000 (0:00:01.197) 0:02:03.731 **** 2025-09-04 01:13:06.641161 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:13:06.641172 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:13:06.641183 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:13:06.641194 | orchestrator | 2025-09-04 01:13:06.641237 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2025-09-04 01:13:06.641250 | orchestrator | Thursday 04 September 2025 01:10:26 +0000 (0:00:01.962) 0:02:05.694 **** 2025-09-04 
01:13:06.641261 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:13:06.641271 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:13:06.641282 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:13:06.641292 | orchestrator | 2025-09-04 01:13:06.641303 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2025-09-04 01:13:06.641314 | orchestrator | Thursday 04 September 2025 01:10:28 +0000 (0:00:01.488) 0:02:07.183 **** 2025-09-04 01:13:06.641324 | orchestrator | ok: [testbed-node-0] 2025-09-04 01:13:06.641335 | orchestrator | ok: [testbed-node-1] 2025-09-04 01:13:06.641345 | orchestrator | ok: [testbed-node-2] 2025-09-04 01:13:06.641356 | orchestrator | 2025-09-04 01:13:06.641366 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2025-09-04 01:13:06.641377 | orchestrator | Thursday 04 September 2025 01:10:29 +0000 (0:00:00.891) 0:02:08.074 **** 2025-09-04 01:13:06.641388 | orchestrator | ok: [testbed-node-2] 2025-09-04 01:13:06.641398 | orchestrator | ok: [testbed-node-1] 2025-09-04 01:13:06.641408 | orchestrator | ok: [testbed-node-0] 2025-09-04 01:13:06.641419 | orchestrator | 2025-09-04 01:13:06.641430 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-04 01:13:06.641440 | orchestrator | Thursday 04 September 2025 01:10:31 +0000 (0:00:02.734) 0:02:10.808 **** 2025-09-04 01:13:06.641451 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 01:13:06.641462 | orchestrator | 2025-09-04 01:13:06.641472 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2025-09-04 01:13:06.641483 | orchestrator | Thursday 04 September 2025 01:10:32 +0000 (0:00:00.503) 0:02:11.312 **** 2025-09-04 01:13:06.641493 | orchestrator | ok: [testbed-node-0] 2025-09-04 01:13:06.641504 | orchestrator | 2025-09-04 01:13:06.641515 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-09-04 01:13:06.641525 | orchestrator | Thursday 04 September 2025 01:10:36 +0000 (0:00:03.839) 0:02:15.152 **** 2025-09-04 01:13:06.641536 | orchestrator | ok: [testbed-node-0] 2025-09-04 01:13:06.641546 | orchestrator | 2025-09-04 01:13:06.641557 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2025-09-04 01:13:06.641567 | orchestrator | Thursday 04 September 2025 01:10:39 +0000 (0:00:03.080) 0:02:18.232 **** 2025-09-04 01:13:06.641578 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-09-04 01:13:06.641588 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-09-04 01:13:06.641599 | orchestrator | 2025-09-04 01:13:06.641610 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2025-09-04 01:13:06.641620 | orchestrator | Thursday 04 September 2025 01:10:45 +0000 (0:00:06.596) 0:02:24.829 **** 2025-09-04 01:13:06.641631 | orchestrator | ok: [testbed-node-0] 2025-09-04 01:13:06.641690 | orchestrator | 2025-09-04 01:13:06.641703 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2025-09-04 01:13:06.641714 | orchestrator | Thursday 04 September 2025 01:10:49 +0000 (0:00:03.354) 0:02:28.183 **** 2025-09-04 01:13:06.641724 | orchestrator | ok: [testbed-node-0] 2025-09-04 01:13:06.641735 | orchestrator | ok: [testbed-node-1] 
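The health-manager plumbing set up by the tasks above (security-group rules, a per-node port on the load-balancer management network, the ohm0 OVS internal interface and a DHCP client on it) can be sketched with the commands below. Only the security-group names and the port/protocol numbers come from the log; the network name, port naming scheme and dhclient configuration path are assumptions standing in for what the role's templates actually render.

  # Hedged sketch of the per-node ohm0 plumbing; everything marked ILLUSTRATIVE
  # is an assumption, not read from this log.
  openstack security group rule create --protocol icmp lb-mgmt-sec-grp
  openstack security group rule create --protocol tcp --dst-port 22 lb-mgmt-sec-grp
  openstack security group rule create --protocol tcp --dst-port 9443 lb-mgmt-sec-grp
  openstack security group rule create --protocol udp --dst-port 5555 lb-health-mgr-sec-grp

  # One listen port per health-manager node, bound to that node
  # (network and port names ILLUSTRATIVE).
  PORT_ID=$(openstack port create --network lb-mgmt-net \
    --security-group lb-health-mgr-sec-grp --host "$(hostname)" \
    -f value -c id "octavia-health-manager-listen-port-$(hostname)")
  PORT_MAC=$(openstack port show -f value -c mac_address "$PORT_ID")

  # Attach the port to br-int as the ohm0 internal interface and lease an
  # address on it via DHCP (conf path ILLUSTRATIVE).
  ovs-vsctl --may-exist add-port br-int ohm0 \
    -- set Interface ohm0 type=internal \
    -- set Interface ohm0 external-ids:iface-id="$PORT_ID" \
    -- set Interface ohm0 external-ids:attached-mac="$PORT_MAC" \
    -- set Interface ohm0 external-ids:iface-status=active
  ip link set dev ohm0 address "$PORT_MAC"
  ip link set dev ohm0 up
  dhclient -v -cf /etc/dhcp/octavia/dhclient.conf ohm0

In the deployment itself these steps are templated into the octavia-interface.service unit and the dhclient configuration that the tasks above install, enable and restart; the manual commands are only meant to make the data path visible.
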
2025-09-04 01:13:06.641745 | orchestrator | ok: [testbed-node-2] 2025-09-04 01:13:06.641756 | orchestrator | 2025-09-04 01:13:06.641767 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2025-09-04 01:13:06.641786 | orchestrator | Thursday 04 September 2025 01:10:49 +0000 (0:00:00.314) 0:02:28.497 **** 2025-09-04 01:13:06.641800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-04 01:13:06.641849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-04 01:13:06.641863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-04 01:13:06.641875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-04 01:13:06.641887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-04 01:13:06.641917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-04 01:13:06.641929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-04 01:13:06.641945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-04 01:13:06.641982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-04 01:13:06.641995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 
'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-04 01:13:06.642007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-04 01:13:06.642073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-04 01:13:06.642094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:13:06.642106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:13:06.642122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:13:06.642134 | orchestrator | 2025-09-04 
01:13:06.642145 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-09-04 01:13:06.642157 | orchestrator | Thursday 04 September 2025 01:10:51 +0000 (0:00:02.413) 0:02:30.911 **** 2025-09-04 01:13:06.642168 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:13:06.642179 | orchestrator | 2025-09-04 01:13:06.642217 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-09-04 01:13:06.642229 | orchestrator | Thursday 04 September 2025 01:10:52 +0000 (0:00:00.124) 0:02:31.036 **** 2025-09-04 01:13:06.642240 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:13:06.642250 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:13:06.642261 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:13:06.642272 | orchestrator | 2025-09-04 01:13:06.642282 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-09-04 01:13:06.642293 | orchestrator | Thursday 04 September 2025 01:10:52 +0000 (0:00:00.488) 0:02:31.525 **** 2025-09-04 01:13:06.642305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-04 01:13:06.642324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-04 01:13:06.642336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-04 01:13:06.642347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-04 01:13:06.642363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-04 01:13:06.642375 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:13:06.642413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-04 01:13:06.642426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-04 01:13:06.642437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-04 01:13:06.642455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-04 01:13:06.642466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-04 01:13:06.642477 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:13:06.642498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-04 01:13:06.642536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-04 01:13:06.642548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-04 01:13:06.642560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-04 01:13:06.642579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-04 01:13:06.642590 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:13:06.642601 | orchestrator | 2025-09-04 01:13:06.642612 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-04 01:13:06.642623 | orchestrator | Thursday 04 September 2025 01:10:53 +0000 (0:00:00.657) 0:02:32.182 **** 2025-09-04 01:13:06.642633 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-04 01:13:06.642665 | orchestrator | 2025-09-04 01:13:06.642676 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-09-04 01:13:06.642687 | orchestrator | Thursday 04 September 2025 01:10:53 +0000 (0:00:00.534) 0:02:32.717 **** 2025-09-04 01:13:06.642703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-04 01:13:06.642742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, '2025-09-04 01:13:06 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-04 01:13:06.642756 | orchestrator | haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-04 01:13:06.642769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-04 01:13:06.642788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-04 01:13:06.642799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-04 01:13:06.642810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-04 01:13:06.642826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-04 01:13:06.642862 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-04 01:13:06.642875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-04 01:13:06.642893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-04 01:13:06.642904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-04 01:13:06.642916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-04 01:13:06.642927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:13:06.642943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:13:06.642960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:13:06.642971 | orchestrator | 2025-09-04 01:13:06.642982 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-09-04 01:13:06.642999 | orchestrator | Thursday 04 September 2025 01:10:58 +0000 (0:00:05.262) 0:02:37.979 **** 2025-09-04 01:13:06.643011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-04 01:13:06.643022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-04 01:13:06.643033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 
'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-04 01:13:06.643045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-04 01:13:06.643061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-04 01:13:06.643072 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:13:06.643093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-04 01:13:06.643114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-04 01:13:06.643125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-04 01:13:06.643136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-04 01:13:06.643148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-04 01:13:06.643159 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:13:06.643174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-04 01:13:06.643193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-04 01:13:06.643211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-04 01:13:06.643222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-04 01:13:06.643233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-04 01:13:06.643244 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:13:06.643255 | orchestrator | 2025-09-04 01:13:06.643266 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-09-04 01:13:06.643277 | orchestrator | Thursday 04 September 2025 01:10:59 +0000 (0:00:00.911) 0:02:38.891 **** 2025-09-04 01:13:06.643288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-04 01:13:06.643304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-04 01:13:06.643328 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-04 01:13:06.643340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-04 01:13:06.643351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-04 01:13:06.643362 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:13:06.643373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-04 01:13:06.643384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-04 01:13:06.643396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-04 01:13:06.643420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-04 01:13:06.643439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-04 01:13:06.643450 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:13:06.643462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-04 01:13:06.643473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-04 01:13:06.643484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-04 01:13:06.643495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-04 01:13:06.643517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-04 01:13:06.643529 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:13:06.643540 | orchestrator | 2025-09-04 01:13:06.643551 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-09-04 01:13:06.643561 | orchestrator | Thursday 04 September 2025 01:11:00 +0000 (0:00:00.849) 0:02:39.741 **** 2025-09-04 01:13:06.643580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-04 01:13:06.643592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-04 01:13:06.643603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-04 01:13:06.643615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-04 01:13:06.643636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-04 01:13:06.643673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-04 01:13:06.643685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-04 01:13:06.643696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-04 01:13:06.643707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-04 01:13:06.643719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-04 01:13:06.643730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-04 01:13:06.643752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-04 01:13:06.643770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 
'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:13:06.643782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:13:06.643793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:13:06.643804 | orchestrator | 2025-09-04 01:13:06.643815 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-09-04 01:13:06.643826 | orchestrator | Thursday 04 September 2025 01:11:05 +0000 (0:00:05.159) 0:02:44.900 **** 2025-09-04 01:13:06.643837 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-09-04 01:13:06.643848 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-09-04 01:13:06.643859 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-09-04 01:13:06.643870 | orchestrator | 2025-09-04 01:13:06.643881 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-09-04 01:13:06.643892 | orchestrator | Thursday 04 September 2025 01:11:08 +0000 (0:00:02.100) 0:02:47.001 **** 2025-09-04 01:13:06.643904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-04 01:13:06.643926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-04 01:13:06.643945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-04 01:13:06.643957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-04 01:13:06.643968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-04 01:13:06.643980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-04 01:13:06.643996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-04 01:13:06.644012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-04 01:13:06.644030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-04 01:13:06.644042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-04 01:13:06.644053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-04 01:13:06.644064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-04 01:13:06.644076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:13:06.644093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:13:06.644109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:13:06.644120 | orchestrator | 2025-09-04 01:13:06.644131 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-09-04 01:13:06.644142 | orchestrator | Thursday 04 September 2025 01:11:24 +0000 (0:00:16.226) 0:03:03.227 **** 2025-09-04 01:13:06.644153 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:13:06.644164 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:13:06.644174 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:13:06.644185 | orchestrator | 2025-09-04 01:13:06.644196 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-09-04 01:13:06.644207 | orchestrator | Thursday 04 September 2025 01:11:25 +0000 (0:00:01.527) 0:03:04.755 **** 2025-09-04 01:13:06.644223 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-04 01:13:06.644234 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-04 01:13:06.644245 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-04 01:13:06.644255 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-04 01:13:06.644266 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 
2025-09-04 01:13:06.644277 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-04 01:13:06.644288 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-04 01:13:06.644299 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-04 01:13:06.644310 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-04 01:13:06.644320 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-04 01:13:06.644331 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-04 01:13:06.644341 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-04 01:13:06.644352 | orchestrator | 2025-09-04 01:13:06.644363 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-09-04 01:13:06.644374 | orchestrator | Thursday 04 September 2025 01:11:30 +0000 (0:00:05.215) 0:03:09.970 **** 2025-09-04 01:13:06.644385 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-04 01:13:06.644395 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-04 01:13:06.644406 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-04 01:13:06.644424 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-04 01:13:06.644435 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-04 01:13:06.644445 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-04 01:13:06.644456 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-04 01:13:06.644467 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-04 01:13:06.644477 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-04 01:13:06.644488 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-04 01:13:06.644499 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-04 01:13:06.644509 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-04 01:13:06.644520 | orchestrator | 2025-09-04 01:13:06.644531 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-09-04 01:13:06.644542 | orchestrator | Thursday 04 September 2025 01:11:36 +0000 (0:00:05.129) 0:03:15.100 **** 2025-09-04 01:13:06.644553 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-04 01:13:06.644564 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-04 01:13:06.644574 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-04 01:13:06.644585 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-04 01:13:06.644595 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-04 01:13:06.644606 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-04 01:13:06.644617 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-04 01:13:06.644628 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-04 01:13:06.644638 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-04 01:13:06.644704 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-04 01:13:06.644715 | orchestrator | changed: [testbed-node-1] => 
(item=server_ca.key.pem) 2025-09-04 01:13:06.644726 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-04 01:13:06.644737 | orchestrator | 2025-09-04 01:13:06.644748 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-09-04 01:13:06.644758 | orchestrator | Thursday 04 September 2025 01:11:41 +0000 (0:00:05.367) 0:03:20.467 **** 2025-09-04 01:13:06.644775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-04 01:13:06.644794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-04 01:13:06.644817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-04 01:13:06.644829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-04 01:13:06.644840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-04 01:13:06.644852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-04 01:13:06.644868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-04 01:13:06.644885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-04 01:13:06.644903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-04 01:13:06.644914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-04 01:13:06.644925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-04 01:13:06.644937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-04 01:13:06.644948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:13:06.644963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-04 01:13:06.644982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 
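The "Check octavia containers" task above is the step that makes each of the five Octavia containers (octavia_api, octavia_driver_agent, octavia_health_manager, octavia_housekeeping, octavia_worker) match the definition shown in the log (image, volumes, healthcheck) and notifies the restart handlers that run further down. A minimal spot-check sketch for one controller node, assuming the Docker engine that kolla-ansible manages in this testbed and using only the container names taken from the log above:

  # Hedged sketch: report state and healthcheck status of the Octavia containers on one node.
  for c in octavia_api octavia_driver_agent octavia_health_manager octavia_housekeeping octavia_worker; do
    state=$(docker inspect --format '{{.State.Status}}' "$c" 2>/dev/null || echo missing)
    health=$(docker inspect --format '{{if .State.Health}}{{.State.Health.Status}}{{else}}no healthcheck{{end}}' "$c" 2>/dev/null || echo n/a)
    printf '%-26s state=%-8s health=%s\n' "$c" "$state" "$health"
  done

The health column corresponds to the 'healthcheck' entries in the service definitions above (healthcheck_curl against the API on port 9876, healthcheck_port probes for the worker, health-manager and housekeeping containers).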
2025-09-04 01:13:06.645002 | orchestrator | 2025-09-04 01:13:06.645013 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-04 01:13:06.645024 | orchestrator | Thursday 04 September 2025 01:11:45 +0000 (0:00:03.542) 0:03:24.010 **** 2025-09-04 01:13:06.645034 | orchestrator | skipping: [testbed-node-0] 2025-09-04 01:13:06.645045 | orchestrator | skipping: [testbed-node-1] 2025-09-04 01:13:06.645056 | orchestrator | skipping: [testbed-node-2] 2025-09-04 01:13:06.645067 | orchestrator | 2025-09-04 01:13:06.645078 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-09-04 01:13:06.645088 | orchestrator | Thursday 04 September 2025 01:11:45 +0000 (0:00:00.296) 0:03:24.307 **** 2025-09-04 01:13:06.645099 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:13:06.645110 | orchestrator | 2025-09-04 01:13:06.645121 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-09-04 01:13:06.645132 | orchestrator | Thursday 04 September 2025 01:11:47 +0000 (0:00:02.054) 0:03:26.361 **** 2025-09-04 01:13:06.645143 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:13:06.645153 | orchestrator | 2025-09-04 01:13:06.645164 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-09-04 01:13:06.645175 | orchestrator | Thursday 04 September 2025 01:11:49 +0000 (0:00:02.051) 0:03:28.413 **** 2025-09-04 01:13:06.645186 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:13:06.645196 | orchestrator | 2025-09-04 01:13:06.645207 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-09-04 01:13:06.645218 | orchestrator | Thursday 04 September 2025 01:11:51 +0000 (0:00:02.181) 0:03:30.595 **** 2025-09-04 01:13:06.645227 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:13:06.645237 | orchestrator | 2025-09-04 01:13:06.645247 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2025-09-04 01:13:06.645256 | orchestrator | Thursday 04 September 2025 01:11:53 +0000 (0:00:02.114) 0:03:32.709 **** 2025-09-04 01:13:06.645265 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:13:06.645275 | orchestrator | 2025-09-04 01:13:06.645284 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-04 01:13:06.645294 | orchestrator | Thursday 04 September 2025 01:12:14 +0000 (0:00:20.908) 0:03:53.618 **** 2025-09-04 01:13:06.645303 | orchestrator | 2025-09-04 01:13:06.645313 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-04 01:13:06.645323 | orchestrator | Thursday 04 September 2025 01:12:14 +0000 (0:00:00.088) 0:03:53.706 **** 2025-09-04 01:13:06.645332 | orchestrator | 2025-09-04 01:13:06.645342 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-04 01:13:06.645352 | orchestrator | Thursday 04 September 2025 01:12:14 +0000 (0:00:00.067) 0:03:53.774 **** 2025-09-04 01:13:06.645361 | orchestrator | 2025-09-04 01:13:06.645371 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2025-09-04 01:13:06.645380 | orchestrator | Thursday 04 September 2025 01:12:14 +0000 (0:00:00.064) 0:03:53.839 **** 2025-09-04 01:13:06.645390 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:13:06.645399 | orchestrator | changed: 
[testbed-node-1] 2025-09-04 01:13:06.645409 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:13:06.645418 | orchestrator | 2025-09-04 01:13:06.645428 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2025-09-04 01:13:06.645437 | orchestrator | Thursday 04 September 2025 01:12:31 +0000 (0:00:16.805) 0:04:10.644 **** 2025-09-04 01:13:06.645447 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:13:06.645456 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:13:06.645466 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:13:06.645481 | orchestrator | 2025-09-04 01:13:06.645491 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2025-09-04 01:13:06.645501 | orchestrator | Thursday 04 September 2025 01:12:43 +0000 (0:00:11.859) 0:04:22.504 **** 2025-09-04 01:13:06.645510 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:13:06.645520 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:13:06.645529 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:13:06.645539 | orchestrator | 2025-09-04 01:13:06.645548 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2025-09-04 01:13:06.645558 | orchestrator | Thursday 04 September 2025 01:12:53 +0000 (0:00:10.235) 0:04:32.739 **** 2025-09-04 01:13:06.645567 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:13:06.645577 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:13:06.645586 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:13:06.645595 | orchestrator | 2025-09-04 01:13:06.645605 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2025-09-04 01:13:06.645615 | orchestrator | Thursday 04 September 2025 01:12:59 +0000 (0:00:05.547) 0:04:38.287 **** 2025-09-04 01:13:06.645624 | orchestrator | changed: [testbed-node-0] 2025-09-04 01:13:06.645633 | orchestrator | changed: [testbed-node-1] 2025-09-04 01:13:06.645664 | orchestrator | changed: [testbed-node-2] 2025-09-04 01:13:06.645674 | orchestrator | 2025-09-04 01:13:06.645684 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-04 01:13:06.645694 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-04 01:13:06.645704 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-04 01:13:06.645714 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-04 01:13:06.645723 | orchestrator | 2025-09-04 01:13:06.645733 | orchestrator | 2025-09-04 01:13:06.645743 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-04 01:13:06.645758 | orchestrator | Thursday 04 September 2025 01:13:04 +0000 (0:00:05.676) 0:04:43.963 **** 2025-09-04 01:13:06.645768 | orchestrator | =============================================================================== 2025-09-04 01:13:06.645778 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 20.91s 2025-09-04 01:13:06.645787 | orchestrator | octavia : Add rules for security groups -------------------------------- 17.01s 2025-09-04 01:13:06.645797 | orchestrator | octavia : Restart octavia-api container -------------------------------- 16.81s 2025-09-04 01:13:06.645806 | orchestrator | octavia : Copying over octavia.conf 
------------------------------------ 16.23s 2025-09-04 01:13:06.645816 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.92s 2025-09-04 01:13:06.645825 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.86s 2025-09-04 01:13:06.645835 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.24s 2025-09-04 01:13:06.645844 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.09s 2025-09-04 01:13:06.645854 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.82s 2025-09-04 01:13:06.645863 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.84s 2025-09-04 01:13:06.645873 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.81s 2025-09-04 01:13:06.645882 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.60s 2025-09-04 01:13:06.645892 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 6.19s 2025-09-04 01:13:06.645901 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 5.68s 2025-09-04 01:13:06.645911 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.65s 2025-09-04 01:13:06.645927 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 5.55s 2025-09-04 01:13:06.645936 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.37s 2025-09-04 01:13:06.645946 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.26s 2025-09-04 01:13:06.645955 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.22s 2025-09-04 01:13:06.645965 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.16s 2025-09-04 01:13:09.679152 | orchestrator | 2025-09-04 01:13:09 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-04 01:13:12.718569 | orchestrator | 2025-09-04 01:13:12 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-04 01:13:15.760467 | orchestrator | 2025-09-04 01:13:15 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-04 01:13:18.808099 | orchestrator | 2025-09-04 01:13:18 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-04 01:13:21.847031 | orchestrator | 2025-09-04 01:13:21 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-04 01:13:24.890157 | orchestrator | 2025-09-04 01:13:24 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-04 01:13:27.929958 | orchestrator | 2025-09-04 01:13:27 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-04 01:13:30.970094 | orchestrator | 2025-09-04 01:13:30 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-04 01:13:34.011854 | orchestrator | 2025-09-04 01:13:34 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-04 01:13:37.056044 | orchestrator | 2025-09-04 01:13:37 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-04 01:13:40.090921 | orchestrator | 2025-09-04 01:13:40 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-04 01:13:43.125504 | orchestrator | 2025-09-04 01:13:43 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-04 01:13:46.170429 | orchestrator | 2025-09-04 
01:13:46 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-04 01:13:49.212223 | orchestrator | 2025-09-04 01:13:49 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-04 01:13:52.258861 | orchestrator | 2025-09-04 01:13:52 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-04 01:13:55.298244 | orchestrator | 2025-09-04 01:13:55 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-04 01:13:58.342383 | orchestrator | 2025-09-04 01:13:58 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-04 01:14:01.386458 | orchestrator | 2025-09-04 01:14:01 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-04 01:14:04.430739 | orchestrator | 2025-09-04 01:14:04 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-04 01:14:07.468001 | orchestrator | 2025-09-04 01:14:07.775751 | orchestrator | 2025-09-04 01:14:07.780419 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Thu Sep 4 01:14:07 UTC 2025 2025-09-04 01:14:07.780452 | orchestrator | 2025-09-04 01:14:08.182797 | orchestrator | ok: Runtime: 0:34:34.468191 2025-09-04 01:14:08.432631 | 2025-09-04 01:14:08.432778 | TASK [Bootstrap services] 2025-09-04 01:14:09.161724 | orchestrator | 2025-09-04 01:14:09.161920 | orchestrator | # BOOTSTRAP 2025-09-04 01:14:09.161942 | orchestrator | 2025-09-04 01:14:09.161956 | orchestrator | + set -e 2025-09-04 01:14:09.161968 | orchestrator | + echo 2025-09-04 01:14:09.161981 | orchestrator | + echo '# BOOTSTRAP' 2025-09-04 01:14:09.161996 | orchestrator | + echo 2025-09-04 01:14:09.162064 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-09-04 01:14:09.170454 | orchestrator | + set -e 2025-09-04 01:14:09.170479 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-09-04 01:14:13.647030 | orchestrator | 2025-09-04 01:14:13 | INFO  | It takes a moment until task db643e35-9bfb-4738-a729-75cc1425b1de (flavor-manager) has been started and output is visible here. 
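Editor's note: the flavor-manager task started here aborts with the traceback below. The "local" definition set is requested with recommended=True, but that set only carries 'reference' and 'mandatory' sections, so the unguarded dictionary lookup in FlavorManager.__init__ raises KeyError. The following snippet is only a minimal sketch of that failure mode; the trimmed-down definitions dict and the .get() workaround are illustrative assumptions, not the project's actual data or fix.

# Minimal sketch of the failure mode shown in the traceback below.
# The dictionary mirrors, in heavily trimmed form, the 'local' definition
# set from the log; the guarded lookup at the end is a hypothetical
# workaround, not openstack-flavor-manager's actual fix.
definitions = {
    "reference": [{"field": "name", "mandatory_prefix": "SCS-"}],
    "mandatory": [{"name": "SCS-1L-1", "cpus": 1, "ram": 1024, "disk": 0}],
    # note: no 'recommended' key, which is what the traceback trips over
}
recommended = True

try:
    # unguarded access, as in FlavorManager.__init__ (main.py:101)
    recommended_flavors = definitions["recommended"]
except KeyError as exc:
    print(f"KeyError: {exc}")  # -> KeyError: 'recommended'

# hypothetical defensive variant: tolerate definition sets that ship
# without a 'recommended' section
recommended_flavors = definitions.get("recommended", []) if recommended else []
print(recommended_flavors)  # -> []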
2025-09-04 01:14:17 | orchestrator | Traceback (most recent call last):
2025-09-04 01:14:17 | orchestrator |   /usr/local/lib/python3.13/site-packages/openstack_flavor_manager/main.py:194 in run
2025-09-04 01:14:17 | orchestrator |       191   logger.add(sys.stderr, format=log_fmt, level=level, colorize=True)
2025-09-04 01:14:17 | orchestrator |       192
2025-09-04 01:14:17 | orchestrator |       193   definitions = get_flavor_definitions(name, url)
2025-09-04 01:14:17 | orchestrator |     ❱ 194   manager = FlavorManager(
2025-09-04 01:14:17 | orchestrator |       195       cloud=Cloud(cloud),
2025-09-04 01:14:17 | orchestrator |       196       definitions=definitions,
2025-09-04 01:14:17 | orchestrator |       197       recommended=recommended,
2025-09-04 01:14:17 | orchestrator |     locals: cloud='admin', name='local', url=None, recommended=True, limit_memory=32, debug=False, level='INFO', log_fmt='{time:YYYY-MM-DD HH:mm:ss} | {level: <8} | ...'
2025-09-04 01:14:17 | orchestrator |     definitions['reference'] = [{'field': 'name', 'mandatory_prefix': 'SCS-'}, {'field': 'cpus'}, {'field': 'ram'}, {'field': 'disk'}, {'field': 'public', 'default': True}, {'field': 'disabled', 'default': False}]
2025-09-04 01:14:17 | orchestrator |     definitions['mandatory'] = [SCS-1L-1 (cpus=1, ram=1024, disk=0, crowded-core), SCS-1L-1-5 (1/1024/5, crowded-core), SCS-1V-2 (1/2048/0), SCS-1V-2-5 (1/2048/5), SCS-1V-4 (1/4096/0), SCS-1V-4-10 (1/4096/10), SCS-1V-8 (1/8192/0), SCS-1V-8-20 (1/8192/20), SCS-2V-4 (2/4096/0), SCS-2V-4-10 (2/4096/10), ... +19]; shared-core unless noted, each with scs:disk0-type='network', scs:name-v1/-v2 aliases and hw_rng:allowed='true'
2025-09-04 01:14:17 | orchestrator |   /usr/local/lib/python3.13/site-packages/openstack_flavor_manager/main.py:101 in __init__
2025-09-04 01:14:17 | orchestrator |        98   self.required_flavors = definitions["mandatory"]
2025-09-04 01:14:17 | orchestrator |        99   self.cloud = cloud
2025-09-04 01:14:17 | orchestrator |       100   if recommended:
2025-09-04 01:14:17 | orchestrator |     ❱ 101       recommended_flavors = definitions["recommended"]
2025-09-04 01:14:17 | orchestrator |       102       # Filter recommended flavors based on memory limit
2025-09-04 01:14:17 | orchestrator |       103       limit_memory_mb = limit_memory * 1024
2025-09-04 01:14:17 | orchestrator |       104       filtered_recommended = [
2025-09-04 01:14:17 | orchestrator |     locals: definitions as above, limit_memory=32, recommended=True
2025-09-04 01:14:17 | orchestrator | KeyError: 'recommended'
2025-09-04 01:14:18.055024 | orchestrator | ERROR
2025-09-04 01:14:18.055183 | orchestrator | {
2025-09-04 01:14:18.055220 | orchestrator |     "delta": "0:00:09.160764",
2025-09-04 01:14:18.055258 | orchestrator |     "end": "2025-09-04 01:14:17.945571",
2025-09-04 01:14:18.055281 | orchestrator |     "msg": "non-zero return code",
2025-09-04 01:14:18.055301 | orchestrator |     "rc": 1,
2025-09-04 01:14:18.055319 | orchestrator |     "start": "2025-09-04 01:14:08.784807"
2025-09-04 01:14:18.055338 | orchestrator | } failure
2025-09-04 01:14:18.068203 |
2025-09-04 01:14:18.068307 | PLAY RECAP
2025-09-04 01:14:18.068360 | orchestrator | ok: 22 changed: 9 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0
2025-09-04 01:14:18.068387 |
2025-09-04 01:14:18.314697 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-09-04 01:14:18.318618 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-09-04 01:14:19.071622 |
2025-09-04 01:14:19.071791 | PLAY [Post output play]
2025-09-04 01:14:19.088314 |
2025-09-04 01:14:19.088478 | LOOP [stage-output : Register sources]
2025-09-04 01:14:19.145448 |
2025-09-04 01:14:19.145733 | TASK [stage-output : Check sudo]
2025-09-04 01:14:19.962594 | orchestrator | sudo: a password is required
2025-09-04 01:14:20.182075 | orchestrator | ok: Runtime: 0:00:00.015212
2025-09-04 01:14:20.198986 |
2025-09-04 01:14:20.199169 | LOOP [stage-output : Set source and destination for files and folders]
2025-09-04 01:14:20.243806 |
2025-09-04 01:14:20.244310 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-09-04 01:14:20.311839 | orchestrator | ok
2025-09-04 01:14:20.319923 |
2025-09-04 01:14:20.320052 | LOOP [stage-output : Ensure target folders exist]
2025-09-04 01:14:20.739415 | orchestrator | ok: "docs"
2025-09-04 01:14:20.739746 |
2025-09-04 01:14:20.963766 | orchestrator | ok: "artifacts"
2025-09-04 01:14:21.190104 | orchestrator | ok: "logs"
2025-09-04 01:14:21.211011 |
2025-09-04 01:14:21.211191 | LOOP [stage-output : Copy files and folders to staging folder]
2025-09-04 01:14:21.250954 |
2025-09-04 01:14:21.251226 | TASK [stage-output : Make all log files readable]
2025-09-04 01:14:21.510438 | orchestrator | ok
2025-09-04 01:14:21.520333 |
2025-09-04 01:14:21.520470 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-09-04 01:14:21.554927 | orchestrator | skipping: Conditional result was
False 2025-09-04 01:14:21.569772 | 2025-09-04 01:14:21.569931 | TASK [stage-output : Discover log files for compression] 2025-09-04 01:14:21.594221 | orchestrator | skipping: Conditional result was False 2025-09-04 01:14:21.606207 | 2025-09-04 01:14:21.606369 | LOOP [stage-output : Archive everything from logs] 2025-09-04 01:14:21.646544 | 2025-09-04 01:14:21.646724 | PLAY [Post cleanup play] 2025-09-04 01:14:21.654577 | 2025-09-04 01:14:21.654692 | TASK [Set cloud fact (Zuul deployment)] 2025-09-04 01:14:21.707022 | orchestrator | ok 2025-09-04 01:14:21.716173 | 2025-09-04 01:14:21.716299 | TASK [Set cloud fact (local deployment)] 2025-09-04 01:14:21.739977 | orchestrator | skipping: Conditional result was False 2025-09-04 01:14:21.758144 | 2025-09-04 01:14:21.758332 | TASK [Clean the cloud environment] 2025-09-04 01:14:22.280680 | orchestrator | 2025-09-04 01:14:22 - clean up servers 2025-09-04 01:14:23.022562 | orchestrator | 2025-09-04 01:14:23 - testbed-manager 2025-09-04 01:14:23.103565 | orchestrator | 2025-09-04 01:14:23 - testbed-node-4 2025-09-04 01:14:23.195392 | orchestrator | 2025-09-04 01:14:23 - testbed-node-0 2025-09-04 01:14:23.279998 | orchestrator | 2025-09-04 01:14:23 - testbed-node-3 2025-09-04 01:14:23.380585 | orchestrator | 2025-09-04 01:14:23 - testbed-node-5 2025-09-04 01:14:23.476865 | orchestrator | 2025-09-04 01:14:23 - testbed-node-1 2025-09-04 01:14:23.566122 | orchestrator | 2025-09-04 01:14:23 - testbed-node-2 2025-09-04 01:14:23.643512 | orchestrator | 2025-09-04 01:14:23 - clean up keypairs 2025-09-04 01:14:23.660964 | orchestrator | 2025-09-04 01:14:23 - testbed 2025-09-04 01:14:23.686548 | orchestrator | 2025-09-04 01:14:23 - wait for servers to be gone 2025-09-04 01:14:34.490199 | orchestrator | 2025-09-04 01:14:34 - clean up ports 2025-09-04 01:14:34.722840 | orchestrator | 2025-09-04 01:14:34 - 1d0bd3a7-81f6-4403-a6ac-a6e59aa218a2 2025-09-04 01:14:34.960397 | orchestrator | 2025-09-04 01:14:34 - 21170a48-1eb9-4859-9ac5-28d4dc1b3160 2025-09-04 01:14:35.239217 | orchestrator | 2025-09-04 01:14:35 - 4e63dbcf-5550-44a7-907e-6dda2cba6b3f 2025-09-04 01:14:35.492089 | orchestrator | 2025-09-04 01:14:35 - 801a770a-0901-490c-bf26-a5a95184530f 2025-09-04 01:14:35.697271 | orchestrator | 2025-09-04 01:14:35 - 89d75ea8-0c30-4cd0-ab6e-15d1d63b99d7 2025-09-04 01:14:35.931390 | orchestrator | 2025-09-04 01:14:35 - 93c66f60-2b49-4f14-aa01-40df32abb045 2025-09-04 01:14:36.130460 | orchestrator | 2025-09-04 01:14:36 - c31e170a-a9de-40e5-9928-202e4d424176 2025-09-04 01:14:36.549257 | orchestrator | 2025-09-04 01:14:36 - clean up volumes 2025-09-04 01:14:36.664427 | orchestrator | 2025-09-04 01:14:36 - testbed-volume-2-node-base 2025-09-04 01:14:36.703850 | orchestrator | 2025-09-04 01:14:36 - testbed-volume-0-node-base 2025-09-04 01:14:36.745399 | orchestrator | 2025-09-04 01:14:36 - testbed-volume-5-node-base 2025-09-04 01:14:36.795036 | orchestrator | 2025-09-04 01:14:36 - testbed-volume-1-node-base 2025-09-04 01:14:36.840063 | orchestrator | 2025-09-04 01:14:36 - testbed-volume-4-node-base 2025-09-04 01:14:36.881900 | orchestrator | 2025-09-04 01:14:36 - testbed-volume-3-node-base 2025-09-04 01:14:36.924279 | orchestrator | 2025-09-04 01:14:36 - testbed-volume-manager-base 2025-09-04 01:14:36.966850 | orchestrator | 2025-09-04 01:14:36 - testbed-volume-2-node-5 2025-09-04 01:14:37.002498 | orchestrator | 2025-09-04 01:14:37 - testbed-volume-8-node-5 2025-09-04 01:14:37.041524 | orchestrator | 2025-09-04 01:14:37 - testbed-volume-5-node-5 2025-09-04 01:14:37.085842 | 
orchestrator | 2025-09-04 01:14:37 - testbed-volume-6-node-3 2025-09-04 01:14:37.128156 | orchestrator | 2025-09-04 01:14:37 - testbed-volume-0-node-3 2025-09-04 01:14:37.169859 | orchestrator | 2025-09-04 01:14:37 - testbed-volume-4-node-4 2025-09-04 01:14:37.215528 | orchestrator | 2025-09-04 01:14:37 - testbed-volume-7-node-4 2025-09-04 01:14:37.253850 | orchestrator | 2025-09-04 01:14:37 - testbed-volume-1-node-4 2025-09-04 01:14:37.296264 | orchestrator | 2025-09-04 01:14:37 - testbed-volume-3-node-3 2025-09-04 01:14:37.336506 | orchestrator | 2025-09-04 01:14:37 - disconnect routers 2025-09-04 01:14:37.460384 | orchestrator | 2025-09-04 01:14:37 - testbed 2025-09-04 01:14:38.441928 | orchestrator | 2025-09-04 01:14:38 - clean up subnets 2025-09-04 01:14:38.489719 | orchestrator | 2025-09-04 01:14:38 - subnet-testbed-management 2025-09-04 01:14:38.666756 | orchestrator | 2025-09-04 01:14:38 - clean up networks 2025-09-04 01:14:38.837085 | orchestrator | 2025-09-04 01:14:38 - net-testbed-management 2025-09-04 01:14:39.115167 | orchestrator | 2025-09-04 01:14:39 - clean up security groups 2025-09-04 01:14:39.155503 | orchestrator | 2025-09-04 01:14:39 - testbed-node 2025-09-04 01:14:39.301883 | orchestrator | 2025-09-04 01:14:39 - testbed-management 2025-09-04 01:14:39.415254 | orchestrator | 2025-09-04 01:14:39 - clean up floating ips 2025-09-04 01:14:39.450441 | orchestrator | 2025-09-04 01:14:39 - 81.163.193.146 2025-09-04 01:14:39.886517 | orchestrator | 2025-09-04 01:14:39 - clean up routers 2025-09-04 01:14:39.951899 | orchestrator | 2025-09-04 01:14:39 - testbed 2025-09-04 01:14:40.814802 | orchestrator | ok: Runtime: 0:00:18.717008 2025-09-04 01:14:40.819294 | 2025-09-04 01:14:40.819436 | PLAY RECAP 2025-09-04 01:14:40.819534 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2025-09-04 01:14:40.819582 | 2025-09-04 01:14:40.945121 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-09-04 01:14:40.946092 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-09-04 01:14:41.660846 | 2025-09-04 01:14:41.661001 | PLAY [Cleanup play] 2025-09-04 01:14:41.676420 | 2025-09-04 01:14:41.676540 | TASK [Set cloud fact (Zuul deployment)] 2025-09-04 01:14:41.734362 | orchestrator | ok 2025-09-04 01:14:41.745477 | 2025-09-04 01:14:41.745626 | TASK [Set cloud fact (local deployment)] 2025-09-04 01:14:41.780295 | orchestrator | skipping: Conditional result was False 2025-09-04 01:14:41.798485 | 2025-09-04 01:14:41.798635 | TASK [Clean the cloud environment] 2025-09-04 01:14:42.877423 | orchestrator | 2025-09-04 01:14:42 - clean up servers 2025-09-04 01:14:43.350125 | orchestrator | 2025-09-04 01:14:43 - clean up keypairs 2025-09-04 01:14:43.368019 | orchestrator | 2025-09-04 01:14:43 - wait for servers to be gone 2025-09-04 01:14:43.415123 | orchestrator | 2025-09-04 01:14:43 - clean up ports 2025-09-04 01:14:43.490940 | orchestrator | 2025-09-04 01:14:43 - clean up volumes 2025-09-04 01:14:43.553566 | orchestrator | 2025-09-04 01:14:43 - disconnect routers 2025-09-04 01:14:43.592584 | orchestrator | 2025-09-04 01:14:43 - clean up subnets 2025-09-04 01:14:43.616165 | orchestrator | 2025-09-04 01:14:43 - clean up networks 2025-09-04 01:14:43.795151 | orchestrator | 2025-09-04 01:14:43 - clean up security groups 2025-09-04 01:14:43.828511 | orchestrator | 2025-09-04 01:14:43 - clean up floating ips 2025-09-04 01:14:43.852188 | orchestrator | 2025-09-04 01:14:43 - clean up 
routers 2025-09-04 01:14:44.338358 | orchestrator | ok: Runtime: 0:00:01.331168 2025-09-04 01:14:44.342475 | 2025-09-04 01:14:44.342617 | PLAY RECAP 2025-09-04 01:14:44.342720 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2025-09-04 01:14:44.342769 | 2025-09-04 01:14:44.461885 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-09-04 01:14:44.464398 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-09-04 01:14:45.189212 | 2025-09-04 01:14:45.189390 | PLAY [Base post-fetch] 2025-09-04 01:14:45.204508 | 2025-09-04 01:14:45.204639 | TASK [fetch-output : Set log path for multiple nodes] 2025-09-04 01:14:45.259754 | orchestrator | skipping: Conditional result was False 2025-09-04 01:14:45.281414 | 2025-09-04 01:14:45.281584 | TASK [fetch-output : Set log path for single node] 2025-09-04 01:14:45.327189 | orchestrator | ok 2025-09-04 01:14:45.335145 | 2025-09-04 01:14:45.335326 | LOOP [fetch-output : Ensure local output dirs] 2025-09-04 01:14:45.811904 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/61431fb139e44046ab6d513fa6ea3210/work/logs" 2025-09-04 01:14:46.097873 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/61431fb139e44046ab6d513fa6ea3210/work/artifacts" 2025-09-04 01:14:46.373679 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/61431fb139e44046ab6d513fa6ea3210/work/docs" 2025-09-04 01:14:46.393203 | 2025-09-04 01:14:46.393400 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-09-04 01:14:47.296709 | orchestrator | changed: .d..t...... ./ 2025-09-04 01:14:47.296962 | orchestrator | changed: All items complete 2025-09-04 01:14:47.297004 | 2025-09-04 01:14:48.076431 | orchestrator | changed: .d..t...... ./ 2025-09-04 01:14:48.815964 | orchestrator | changed: .d..t...... 
./ 2025-09-04 01:14:48.847065 | 2025-09-04 01:14:48.847214 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-09-04 01:14:48.882371 | orchestrator | skipping: Conditional result was False 2025-09-04 01:14:48.884793 | orchestrator | skipping: Conditional result was False 2025-09-04 01:14:48.903001 | 2025-09-04 01:14:48.903124 | PLAY RECAP 2025-09-04 01:14:48.903206 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-09-04 01:14:48.903267 | 2025-09-04 01:14:49.030706 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-09-04 01:14:49.031763 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-09-04 01:14:49.770178 | 2025-09-04 01:14:49.770406 | PLAY [Base post] 2025-09-04 01:14:49.785813 | 2025-09-04 01:14:49.785946 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-09-04 01:14:50.737057 | orchestrator | changed 2025-09-04 01:14:50.747184 | 2025-09-04 01:14:50.747329 | PLAY RECAP 2025-09-04 01:14:50.747405 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-09-04 01:14:50.747480 | 2025-09-04 01:14:50.868919 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-09-04 01:14:50.870751 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-09-04 01:14:51.640205 | 2025-09-04 01:14:51.640392 | PLAY [Base post-logs] 2025-09-04 01:14:51.650743 | 2025-09-04 01:14:51.650900 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-09-04 01:14:52.105439 | localhost | changed 2025-09-04 01:14:52.127130 | 2025-09-04 01:14:52.127316 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-09-04 01:14:52.168985 | localhost | ok 2025-09-04 01:14:52.175299 | 2025-09-04 01:14:52.175461 | TASK [Set zuul-log-path fact] 2025-09-04 01:14:52.194798 | localhost | ok 2025-09-04 01:14:52.207987 | 2025-09-04 01:14:52.208117 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-09-04 01:14:52.243133 | localhost | ok 2025-09-04 01:14:52.246551 | 2025-09-04 01:14:52.246663 | TASK [upload-logs : Create log directories] 2025-09-04 01:14:52.739686 | localhost | changed 2025-09-04 01:14:52.742550 | 2025-09-04 01:14:52.742659 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-09-04 01:14:53.231911 | localhost -> localhost | ok: Runtime: 0:00:00.007021 2025-09-04 01:14:53.240052 | 2025-09-04 01:14:53.240304 | TASK [upload-logs : Upload logs to log server] 2025-09-04 01:14:53.807398 | localhost | Output suppressed because no_log was given 2025-09-04 01:14:53.811648 | 2025-09-04 01:14:53.811834 | LOOP [upload-logs : Compress console log and json output] 2025-09-04 01:14:53.866512 | localhost | skipping: Conditional result was False 2025-09-04 01:14:53.871803 | localhost | skipping: Conditional result was False 2025-09-04 01:14:53.884140 | 2025-09-04 01:14:53.884372 | LOOP [upload-logs : Upload compressed console log and json output] 2025-09-04 01:14:53.929908 | localhost | skipping: Conditional result was False 2025-09-04 01:14:53.930696 | 2025-09-04 01:14:53.933490 | localhost | skipping: Conditional result was False 2025-09-04 01:14:53.947871 | 2025-09-04 01:14:53.948131 | LOOP [upload-logs : Upload console log and json output]
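Editor's note: the "Clean the cloud environment" task in the post and cleanup plays above tears resources down in a fixed order (servers and keypairs, wait for the servers to be gone, then ports, volumes, router interfaces, subnets, networks, security groups, floating IPs and finally the routers). The sketch below only mirrors that order with openstacksdk; the cloud name, the "testbed" name prefix and the blanket floating-IP removal are placeholder assumptions, and the job itself performs this cleanup through its own playbook task, not this script.

# Illustrative openstacksdk sketch of the teardown order seen in the log.
import openstack


def cleanup(cloud_name: str = "testbed", prefix: str = "testbed") -> None:
    conn = openstack.connect(cloud=cloud_name)

    # clean up servers and keypairs, then wait for the servers to be gone
    servers = [s for s in conn.compute.servers() if s.name.startswith(prefix)]
    for server in servers:
        conn.compute.delete_server(server)
    for keypair in conn.compute.keypairs():
        if keypair.name.startswith(prefix):
            conn.compute.delete_keypair(keypair)
    for server in servers:
        conn.compute.wait_for_delete(server)

    # ports and volumes can only go once the servers no longer use them
    for port in conn.network.ports():
        if (port.name or "").startswith(prefix):
            conn.network.delete_port(port)
    for volume in conn.block_storage.volumes():
        if (volume.name or "").startswith(prefix):
            conn.block_storage.delete_volume(volume)

    # disconnect routers before removing subnets and networks
    routers = [r for r in conn.network.routers() if r.name.startswith(prefix)]
    for router in routers:
        for subnet in conn.network.subnets():
            if subnet.name.startswith(f"subnet-{prefix}"):
                conn.network.remove_interface_from_router(router, subnet_id=subnet.id)
    for subnet in conn.network.subnets():
        if subnet.name.startswith(f"subnet-{prefix}"):
            conn.network.delete_subnet(subnet)
    for network in conn.network.networks():
        if network.name.startswith(f"net-{prefix}"):
            conn.network.delete_network(network)

    # security groups, floating IPs and finally the routers themselves
    for group in conn.network.security_groups():
        if group.name.startswith(prefix):
            conn.network.delete_security_group(group)
    for ip in conn.network.ips():
        # the real cleanup only releases the testbed's floating IP;
        # this example removes every floating IP in the project
        conn.network.delete_ip(ip)
    for router in routers:
        conn.network.delete_router(router)


if __name__ == "__main__":
    cleanup()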